Jan 23 18:42:09.456565 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 15:50:57 -00 2026
Jan 23 18:42:09.456620 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051
Jan 23 18:42:09.456662 kernel: BIOS-provided physical RAM map:
Jan 23 18:42:09.456675 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 18:42:09.456685 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 18:42:09.456697 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 18:42:09.456710 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 18:42:09.456720 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 18:42:09.456752 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 18:42:09.456764 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 18:42:09.456799 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:42:09.456810 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 18:42:09.456822 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:42:09.456834 kernel: NX (Execute Disable) protection: active
Jan 23 18:42:09.456848 kernel: APIC: Static calls initialized
Jan 23 18:42:09.456882 kernel: SMBIOS 2.8 present.
Jan 23 18:42:09.456912 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 23 18:42:09.456955 kernel: DMI: Memory slots populated: 1/1 Jan 23 18:42:09.456966 kernel: Hypervisor detected: KVM Jan 23 18:42:09.456978 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 18:42:09.456990 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 23 18:42:09.457003 kernel: kvm-clock: using sched offset of 8439421809 cycles Jan 23 18:42:09.457016 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 18:42:09.457029 kernel: tsc: Detected 2445.424 MHz processor Jan 23 18:42:09.457066 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 23 18:42:09.457079 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 23 18:42:09.457092 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 23 18:42:09.457104 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 23 18:42:09.457116 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 23 18:42:09.457127 kernel: Using GB pages for direct mapping Jan 23 18:42:09.457139 kernel: ACPI: Early table checksum verification disabled Jan 23 18:42:09.457180 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 23 18:42:09.457194 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457206 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457218 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457229 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 23 18:42:09.457241 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457254 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457356 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457371 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 18:42:09.457411 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 23 18:42:09.457423 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 23 18:42:09.457436 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 23 18:42:09.457450 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 23 18:42:09.457484 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 23 18:42:09.457498 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 23 18:42:09.457511 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 23 18:42:09.457524 kernel: No NUMA configuration found Jan 23 18:42:09.457536 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 23 18:42:09.457550 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 23 18:42:09.457586 kernel: Zone ranges: Jan 23 18:42:09.457598 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 23 18:42:09.457612 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 23 18:42:09.457624 kernel: Normal empty Jan 23 18:42:09.457635 kernel: Device empty Jan 23 18:42:09.457648 kernel: Movable zone start for each node Jan 23 18:42:09.457661 kernel: Early memory node ranges Jan 23 18:42:09.457674 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 23 18:42:09.457716 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 23 18:42:09.457728 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 23 18:42:09.457740 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 23 18:42:09.457754 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 23 18:42:09.457783 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 23 18:42:09.457795 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 23 18:42:09.457807 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 23 18:42:09.457846 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 23 18:42:09.457860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 23 18:42:09.457890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 23 18:42:09.457903 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 23 18:42:09.457948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 23 18:42:09.457961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 23 18:42:09.457975 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 23 18:42:09.458015 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 23 18:42:09.458027 kernel: TSC deadline timer available Jan 23 18:42:09.458039 kernel: CPU topo: Max. logical packages: 1 Jan 23 18:42:09.458055 kernel: CPU topo: Max. logical dies: 1 Jan 23 18:42:09.458070 kernel: CPU topo: Max. dies per package: 1 Jan 23 18:42:09.458083 kernel: CPU topo: Max. threads per core: 1 Jan 23 18:42:09.458096 kernel: CPU topo: Num. cores per package: 4 Jan 23 18:42:09.458109 kernel: CPU topo: Num. threads per package: 4 Jan 23 18:42:09.458154 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 23 18:42:09.458168 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 23 18:42:09.458182 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 23 18:42:09.458195 kernel: kvm-guest: setup PV sched yield Jan 23 18:42:09.458208 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 23 18:42:09.458223 kernel: Booting paravirtualized kernel on KVM Jan 23 18:42:09.458237 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 23 18:42:09.458328 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 23 18:42:09.458345 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 23 18:42:09.458360 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 23 18:42:09.458373 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 23 18:42:09.458386 kernel: kvm-guest: PV spinlocks enabled Jan 23 18:42:09.458400 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 23 18:42:09.458416 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:42:09.458462 kernel: random: crng init done Jan 23 18:42:09.458474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 18:42:09.458486 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 
18:42:09.458498 kernel: Fallback order for Node 0: 0
Jan 23 18:42:09.458510 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 23 18:42:09.458521 kernel: Policy zone: DMA32
Jan 23 18:42:09.458533 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:42:09.458571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 18:42:09.458583 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:42:09.458595 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:42:09.458606 kernel: Dynamic Preempt: voluntary
Jan 23 18:42:09.458618 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:42:09.458636 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:42:09.458648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 18:42:09.458682 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:42:09.458709 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:42:09.458721 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:42:09.458734 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:42:09.458746 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 18:42:09.458757 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:42:09.458770 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:42:09.458781 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 18:42:09.458815 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 18:42:09.458827 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:42:09.458899 kernel: Console: colour VGA+ 80x25
Jan 23 18:42:09.458961 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:42:09.458973 kernel: ACPI: Core revision 20240827
Jan 23 18:42:09.458986 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:42:09.458998 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:42:09.459010 kernel: x2apic enabled
Jan 23 18:42:09.459023 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:42:09.459073 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 18:42:09.459086 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 18:42:09.459098 kernel: kvm-guest: setup PV IPIs
Jan 23 18:42:09.459110 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:42:09.459145 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 18:42:09.459157 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 23 18:42:09.459170 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:42:09.459183 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:42:09.459195 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:42:09.459207 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:42:09.459220 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:42:09.459253 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:42:09.459310 kernel: Speculative Store Bypass: Vulnerable
Jan 23 18:42:09.459323 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 18:42:09.459336 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 18:42:09.459349 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:42:09.459361 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 18:42:09.459404 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:42:09.459416 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:42:09.459429 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:42:09.459442 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:42:09.459454 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:42:09.459466 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:42:09.459479 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 18:42:09.459513 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:42:09.459525 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:42:09.459538 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:42:09.459550 kernel: landlock: Up and running.
Jan 23 18:42:09.459563 kernel: SELinux: Initializing.
Jan 23 18:42:09.459575 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:42:09.459588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:42:09.459616 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 18:42:09.459649 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 18:42:09.459661 kernel: signal: max sigframe size: 1776
Jan 23 18:42:09.459674 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:42:09.459686 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:42:09.459699 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:42:09.459711 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 18:42:09.459724 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:42:09.459757 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:42:09.459770 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 18:42:09.459782 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 18:42:09.459795 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 23 18:42:09.459808 kernel: Memory: 2445296K/2571752K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15532K init, 2508K bss, 120520K reserved, 0K cma-reserved)
Jan 23 18:42:09.459820 kernel: devtmpfs: initialized
Jan 23 18:42:09.459833 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:42:09.459866 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:42:09.459879 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 18:42:09.459891 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:42:09.459904 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:42:09.459945 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:42:09.459959 kernel: audit: type=2000 audit(1769193722.831:1): state=initialized audit_enabled=0 res=1
Jan 23 18:42:09.459971 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:42:09.460005 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:42:09.460018 kernel: cpuidle: using governor menu
Jan 23 18:42:09.460031 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:42:09.460043 kernel: dca service started, version 1.12.1
Jan 23 18:42:09.460056 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 18:42:09.460068 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 18:42:09.460081 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:42:09.460115 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:42:09.460128 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 18:42:09.460140 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 18:42:09.460153 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 18:42:09.460165 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 18:42:09.460177 kernel: ACPI: Added _OSI(Module Device) Jan 23 18:42:09.460190 kernel: ACPI: Added _OSI(Processor Device) Jan 23 18:42:09.460222 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 18:42:09.460235 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 18:42:09.460247 kernel: ACPI: Interpreter enabled Jan 23 18:42:09.460300 kernel: ACPI: PM: (supports S0 S3 S5) Jan 23 18:42:09.460317 kernel: ACPI: Using IOAPIC for interrupt routing Jan 23 18:42:09.460330 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 23 18:42:09.460342 kernel: PCI: Using E820 reservations for host bridge windows Jan 23 18:42:09.460380 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 23 18:42:09.460394 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 18:42:09.460751 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 18:42:09.461107 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 23 18:42:09.461469 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 23 18:42:09.461489 kernel: PCI host bridge to bus 0000:00 Jan 23 18:42:09.461802 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 23 18:42:09.462106 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 23 18:42:09.462431 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 23 18:42:09.462685 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 23 18:42:09.462974 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 23 18:42:09.463231 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 23 18:42:09.463600 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 18:42:09.463960 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 23 18:42:09.464405 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 23 18:42:09.464754 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 23 18:42:09.465064 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 23 18:42:09.465455 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 23 18:42:09.465726 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 23 18:42:09.466054 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 23 18:42:09.466404 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 23 18:42:09.466733 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 23 18:42:09.467043 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 23 18:42:09.467470 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 23 18:42:09.467742 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 23 18:42:09.468076 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 23 18:42:09.468450 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 23 18:42:09.469820 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 23 18:42:09.472705 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 23 18:42:09.484657 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 23 18:42:09.494131 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 23 18:42:09.498060 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 23 18:42:09.499094 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 23 18:42:09.540886 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 23 18:42:09.543009 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 23 18:42:09.545100 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 23 18:42:09.568610 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 23 18:42:09.570815 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 23 18:42:09.585155 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 23 18:42:09.585218 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 23 18:42:09.585229 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 23 18:42:09.585242 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 23 18:42:09.585341 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 23 18:42:09.585377 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 23 18:42:09.585398 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 23 18:42:09.585420 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 23 18:42:09.585984 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 23 18:42:09.585997 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 23 18:42:09.586022 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 23 18:42:09.586030 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 23 18:42:09.586038 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 23 18:42:09.586046 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 23 18:42:09.587557 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 23 18:42:09.587612 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 23 18:42:09.587670 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 23 18:42:09.587678 kernel: iommu: Default domain type: Translated Jan 23 18:42:09.587686 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 23 18:42:09.587694 kernel: PCI: Using ACPI for IRQ routing Jan 23 18:42:09.587702 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 23 18:42:09.587725 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 23 18:42:09.587733 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 23 18:42:09.594169 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 23 18:42:09.606072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 23 18:42:09.608418 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 23 18:42:09.608809 kernel: vgaarb: loaded Jan 23 18:42:09.608822 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 23 18:42:09.608831 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 23 18:42:09.611963 kernel: clocksource: Switched to clocksource kvm-clock Jan 23 18:42:09.611981 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 
18:42:09.611993 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 18:42:09.612005 kernel: pnp: PnP ACPI init Jan 23 18:42:09.627976 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 23 18:42:09.628037 kernel: pnp: PnP ACPI: found 6 devices Jan 23 18:42:09.628048 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 23 18:42:09.628096 kernel: NET: Registered PF_INET protocol family Jan 23 18:42:09.628105 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 18:42:09.628112 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 18:42:09.628136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 18:42:09.628144 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 18:42:09.628152 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 18:42:09.628160 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 18:42:09.628190 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:42:09.628198 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 18:42:09.628206 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 18:42:09.628214 kernel: NET: Registered PF_XDP protocol family Jan 23 18:42:09.628480 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 23 18:42:09.633623 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 23 18:42:09.636690 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 23 18:42:09.699962 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 23 18:42:09.705325 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 23 18:42:09.705625 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 23 18:42:09.705666 kernel: PCI: CLS 0 bytes, default 64 Jan 23 18:42:09.705706 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 23 18:42:09.705731 kernel: Initialise system trusted keyrings Jan 23 18:42:09.705881 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 18:42:09.705890 kernel: Key type asymmetric registered Jan 23 18:42:09.705977 kernel: Asymmetric key parser 'x509' registered Jan 23 18:42:09.706045 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 18:42:09.706061 kernel: io scheduler mq-deadline registered Jan 23 18:42:09.718349 kernel: io scheduler kyber registered Jan 23 18:42:09.718491 kernel: io scheduler bfq registered Jan 23 18:42:09.718510 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 18:42:09.718724 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 18:42:09.718740 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 18:42:09.718772 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 18:42:09.718784 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 18:42:09.718796 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 18:42:09.718807 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 18:42:09.718819 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 18:42:09.718878 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 18:42:09.719702 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 23 
18:42:09.719783 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 23 18:42:09.720161 kernel: rtc_cmos 00:04: registered as rtc0 Jan 23 18:42:09.720635 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T18:42:06 UTC (1769193726) Jan 23 18:42:09.720998 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 23 18:42:09.721056 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 23 18:42:09.721071 kernel: NET: Registered PF_INET6 protocol family Jan 23 18:42:09.721085 kernel: Segment Routing with IPv6 Jan 23 18:42:09.721099 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 18:42:09.721113 kernel: NET: Registered PF_PACKET protocol family Jan 23 18:42:09.721128 kernel: Key type dns_resolver registered Jan 23 18:42:09.721162 kernel: IPI shorthand broadcast: enabled Jan 23 18:42:09.721207 kernel: sched_clock: Marking stable (3028177276, 624188833)->(3813506280, -161140171) Jan 23 18:42:09.721237 kernel: registered taskstats version 1 Jan 23 18:42:09.721252 kernel: Loading compiled-in X.509 certificates Jan 23 18:42:09.721316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed4528912f8413ae803010e63385bcf7ed197cf1' Jan 23 18:42:09.721461 kernel: Demotion targets for Node 0: null Jan 23 18:42:09.721482 kernel: Key type .fscrypt registered Jan 23 18:42:09.721496 kernel: Key type fscrypt-provisioning registered Jan 23 18:42:09.721561 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 18:42:09.721579 kernel: ima: Allocated hash algorithm: sha1 Jan 23 18:42:09.721618 kernel: ima: No architecture policies found Jan 23 18:42:09.721632 kernel: clk: Disabling unused clocks Jan 23 18:42:09.721645 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 23 18:42:09.721658 kernel: Write protecting the kernel read-only data: 47104k Jan 23 18:42:09.721673 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 23 18:42:09.721725 kernel: Run /init as init process Jan 23 18:42:09.721740 kernel: with arguments: Jan 23 18:42:09.721780 kernel: /init Jan 23 18:42:09.721794 kernel: with environment: Jan 23 18:42:09.721806 kernel: HOME=/ Jan 23 18:42:09.721818 kernel: TERM=linux Jan 23 18:42:09.721831 kernel: hrtimer: interrupt took 6233901 ns Jan 23 18:42:09.721844 kernel: SCSI subsystem initialized Jan 23 18:42:09.721900 kernel: libata version 3.00 loaded. 
Jan 23 18:42:09.722394 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:42:09.722477 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:42:09.723028 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:42:09.723463 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:42:09.723782 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:42:09.724493 kernel: scsi host0: ahci Jan 23 18:42:09.724831 kernel: scsi host1: ahci Jan 23 18:42:09.725237 kernel: scsi host2: ahci Jan 23 18:42:09.725825 kernel: scsi host3: ahci Jan 23 18:42:09.726308 kernel: scsi host4: ahci Jan 23 18:42:09.726679 kernel: scsi host5: ahci Jan 23 18:42:09.726701 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 23 18:42:09.726716 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 23 18:42:09.726729 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 23 18:42:09.726743 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 23 18:42:09.726756 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 23 18:42:09.726807 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 23 18:42:09.726821 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:42:09.726834 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:42:09.726847 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 18:42:09.726860 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:42:09.726873 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 18:42:09.726886 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:42:09.726899 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 18:42:09.726964 kernel: ata3.00: applying bridge limits Jan 23 18:42:09.726978 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:42:09.726991 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 18:42:09.727004 kernel: ata3.00: configured for UDMA/100 Jan 23 18:42:09.727425 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 18:42:09.727762 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 18:42:09.728167 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 23 18:42:09.728189 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:42:09.728205 kernel: GPT:16515071 != 27000831 Jan 23 18:42:09.728253 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:42:09.728323 kernel: GPT:16515071 != 27000831 Jan 23 18:42:09.728336 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 18:42:09.728349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 18:42:09.728704 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 18:42:09.728724 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 18:42:09.729078 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 18:42:09.729100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 23 18:42:09.729113 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:42:09.729126 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:42:09.729182 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 23 18:42:09.729197 kernel: raid6: avx2x4 gen() 36153 MB/s
Jan 23 18:42:09.729211 kernel: raid6: avx2x2 gen() 30467 MB/s
Jan 23 18:42:09.729224 kernel: raid6: avx2x1 gen() 19775 MB/s
Jan 23 18:42:09.729237 kernel: raid6: using algorithm avx2x4 gen() 36153 MB/s
Jan 23 18:42:09.729250 kernel: raid6: .... xor() 4234 MB/s, rmw enabled
Jan 23 18:42:09.729317 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 18:42:09.729333 kernel: xor: automatically using best checksumming function avx
Jan 23 18:42:09.729385 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:42:09.729402 kernel: BTRFS: device fsid ae5f9861-c401-42b4-99c9-2e3fe0b343c2 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 23 18:42:09.729450 kernel: BTRFS info (device dm-0): first mount of filesystem ae5f9861-c401-42b4-99c9-2e3fe0b343c2
Jan 23 18:42:09.729464 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:42:09.729510 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:42:09.729525 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:42:09.729543 kernel: loop: module loaded
Jan 23 18:42:09.729557 kernel: loop0: detected capacity change from 0 to 100560
Jan 23 18:42:09.729570 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 18:42:09.729607 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:42:09.729654 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:42:09.729670 systemd[1]: Detected virtualization kvm.
Jan 23 18:42:09.729684 systemd[1]: Detected architecture x86-64.
Jan 23 18:42:09.729697 systemd[1]: Running in initrd.
Jan 23 18:42:09.729711 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:42:09.729725 systemd[1]: Hostname set to .
Jan 23 18:42:09.729769 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 23 18:42:09.729784 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:42:09.729798 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:42:09.729812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:42:09.729826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:42:09.729841 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:42:09.729855 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:42:09.729906 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:42:09.729957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:42:09.729977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:42:09.729993 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:42:09.730007 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:42:09.730055 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:42:09.730069 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:42:09.730084 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:42:09.730100 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:42:09.730117 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:42:09.730133 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:42:09.730150 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 23 18:42:09.730197 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:42:09.730214 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:42:09.730230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:42:09.730244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:42:09.730312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:42:09.730331 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:42:09.730347 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:42:09.730394 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:42:09.730411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:42:09.730424 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:42:09.730442 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:42:09.730459 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:42:09.730473 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:42:09.730485 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:42:09.730524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:42:09.730538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:42:09.730552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:42:09.730591 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:42:09.730606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:42:09.730711 systemd-journald[318]: Collecting audit messages is enabled.
Jan 23 18:42:09.730777 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:42:09.730791 kernel: Bridge firewalling registered
Jan 23 18:42:09.730804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:42:09.730819 kernel: audit: type=1130 audit(1769193729.718:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.730832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:42:09.730846 systemd-journald[318]: Journal started
Jan 23 18:42:09.730964 systemd-journald[318]: Runtime Journal (/run/log/journal/ea528643fe06434abd3123b8ffbc8b5f) is 6M, max 48.2M, 42.1M free.
Jan 23 18:42:09.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.606144 systemd-modules-load[320]: Inserted module 'br_netfilter'
Jan 23 18:42:09.734782 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:42:09.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.744336 kernel: audit: type=1130 audit(1769193729.733:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.744550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:42:09.872401 kernel: audit: type=1130 audit(1769193729.747:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.756059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:42:09.882895 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:42:09.899559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:42:09.911688 kernel: audit: type=1130 audit(1769193729.900:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.909100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:42:09.925686 systemd-tmpfiles[336]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:42:09.930711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:42:09.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.938318 kernel: audit: type=1130 audit(1769193729.931:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:42:09.942829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:42:09.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.949685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:42:09.959838 kernel: audit: type=1130 audit(1769193729.945:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.959861 kernel: audit: type=1334 audit(1769193729.948:8): prog-id=6 op=LOAD Jan 23 18:42:09.948000 audit: BPF prog-id=6 op=LOAD Jan 23 18:42:09.961093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:42:09.972310 kernel: audit: type=1130 audit(1769193729.961:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.977026 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:42:10.000883 kernel: audit: type=1130 audit(1769193729.977:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:09.981533 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 18:42:10.026864 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ee2a61adbfdca0d8850a6d1564f6a5daa8e67e4645be01ed76a79270fe7c1051 Jan 23 18:42:10.275775 systemd-resolved[347]: Positive Trust Anchors: Jan 23 18:42:10.275819 systemd-resolved[347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:42:10.275825 systemd-resolved[347]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:42:10.275864 systemd-resolved[347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:42:10.346094 systemd-resolved[347]: Defaulting to hostname 'linux'. Jan 23 18:42:10.349489 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 18:42:10.360746 kernel: audit: type=1130 audit(1769193730.349:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:10.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:10.351498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:42:10.459391 kernel: Loading iSCSI transport class v2.0-870. Jan 23 18:42:10.488873 kernel: iscsi: registered transport (tcp) Jan 23 18:42:10.525725 kernel: iscsi: registered transport (qla4xxx) Jan 23 18:42:10.525855 kernel: QLogic iSCSI HBA Driver Jan 23 18:42:10.566601 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:42:10.599645 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:42:10.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:10.612564 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:42:10.695043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 18:42:10.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:10.703896 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 18:42:10.710766 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:42:10.879680 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:42:10.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:10.885000 audit: BPF prog-id=7 op=LOAD Jan 23 18:42:10.885000 audit: BPF prog-id=8 op=LOAD Jan 23 18:42:10.886767 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:42:10.930705 systemd-udevd[594]: Using default interface naming scheme 'v257'. Jan 23 18:42:10.949103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:42:10.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:10.957887 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 18:42:11.009119 dracut-pre-trigger[652]: rd.md=0: removing MD RAID activation Jan 23 18:42:11.027648 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:42:11.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.035000 audit: BPF prog-id=9 op=LOAD Jan 23 18:42:11.036979 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 18:42:11.058409 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:42:11.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.067246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:42:11.120807 systemd-networkd[722]: lo: Link UP Jan 23 18:42:11.120835 systemd-networkd[722]: lo: Gained carrier Jan 23 18:42:11.122312 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:42:11.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.126017 systemd[1]: Reached target network.target - Network. Jan 23 18:42:11.243094 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:42:11.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.252579 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 18:42:11.320344 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 18:42:11.335517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:42:11.372218 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 18:42:11.393240 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 18:42:11.396088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:42:11.417340 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:42:11.449308 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 18:42:11.452478 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:42:11.456637 kernel: AES CTR mode by8 optimization enabled Jan 23 18:42:11.452700 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:42:11.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.458118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:42:11.468100 disk-uuid[771]: Primary Header is updated. Jan 23 18:42:11.468100 disk-uuid[771]: Secondary Entries is updated. Jan 23 18:42:11.468100 disk-uuid[771]: Secondary Header is updated. Jan 23 18:42:11.483831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:42:11.486866 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:42:11.486871 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 18:42:11.488906 systemd-networkd[722]: eth0: Link UP Jan 23 18:42:11.489186 systemd-networkd[722]: eth0: Gained carrier Jan 23 18:42:11.489196 systemd-networkd[722]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:42:11.513055 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:42:11.757461 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:42:11.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.788624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:42:11.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:11.797008 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:42:11.803730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:42:11.810798 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:42:11.818740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:42:11.852972 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:42:11.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:12.682835 disk-uuid[793]: Warning: The kernel is still using the old partition table. Jan 23 18:42:12.682835 disk-uuid[793]: The new table will be used at the next reboot or after you Jan 23 18:42:12.682835 disk-uuid[793]: run partprobe(8) or kpartx(8) Jan 23 18:42:12.682835 disk-uuid[793]: The operation has completed successfully. Jan 23 18:42:12.703251 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:42:12.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:12.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:12.703499 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:42:12.706826 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:42:12.746303 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Jan 23 18:42:12.752173 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:42:12.752212 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:42:12.759243 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:42:12.759500 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:42:12.770335 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:42:12.772419 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 23 18:42:12.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:12.777006 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:42:12.780455 systemd-networkd[722]: eth0: Gained IPv6LL Jan 23 18:42:12.915235 ignition[886]: Ignition 2.24.0 Jan 23 18:42:12.915388 ignition[886]: Stage: fetch-offline Jan 23 18:42:12.915475 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:12.915497 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:12.915614 ignition[886]: parsed url from cmdline: "" Jan 23 18:42:12.915621 ignition[886]: no config URL provided Jan 23 18:42:12.915766 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:42:12.915790 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:42:12.915850 ignition[886]: op(1): [started] loading QEMU firmware config module Jan 23 18:42:12.915857 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 18:42:12.926727 ignition[886]: op(1): [finished] loading QEMU firmware config module Jan 23 18:42:13.089188 ignition[886]: parsing config with SHA512: 5efcebda6763afb4c0f5ceae7448a0d3e9275b103eb12778ac10c630e3273b500e8f456fec98dee12fcc3ffe18c7f3afcaea2fa1d0e6736780ee9f4cac6c3b21 Jan 23 18:42:13.092889 unknown[886]: fetched base config from "system" Jan 23 18:42:13.093455 unknown[886]: fetched user config from "qemu" Jan 23 18:42:13.094807 ignition[886]: fetch-offline: fetch-offline passed Jan 23 18:42:13.094879 ignition[886]: Ignition finished successfully Jan 23 18:42:13.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.098073 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:42:13.102670 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 18:42:13.104024 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 18:42:13.156719 ignition[896]: Ignition 2.24.0 Jan 23 18:42:13.156756 ignition[896]: Stage: kargs Jan 23 18:42:13.157119 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:13.157147 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:13.158777 ignition[896]: kargs: kargs passed Jan 23 18:42:13.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.164052 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:42:13.158852 ignition[896]: Ignition finished successfully Jan 23 18:42:13.169775 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 23 18:42:13.211871 ignition[903]: Ignition 2.24.0 Jan 23 18:42:13.211919 ignition[903]: Stage: disks Jan 23 18:42:13.212152 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:13.212171 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:13.214556 ignition[903]: disks: disks passed Jan 23 18:42:13.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.219850 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:42:13.214613 ignition[903]: Ignition finished successfully Jan 23 18:42:13.221240 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:42:13.228761 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:42:13.229912 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:42:13.236781 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:42:13.241382 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:42:13.247516 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:42:13.294484 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 23 18:42:13.300912 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:42:13.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.310149 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:42:13.438347 kernel: EXT4-fs (vda9): mounted filesystem eebf2bdd-2461-4b18-9f37-721daf86511d r/w with ordered data mode. Quota mode: none. Jan 23 18:42:13.439257 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:42:13.441707 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:42:13.447513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:42:13.453375 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:42:13.455447 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 18:42:13.455496 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:42:13.455521 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:42:13.485782 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:42:13.504250 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Jan 23 18:42:13.504360 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:42:13.504417 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:42:13.504440 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:42:13.504461 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:42:13.494622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 18:42:13.508653 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:42:13.723647 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:42:13.726766 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:42:13.733789 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:42:13.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.753737 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:42:13.758401 kernel: BTRFS info (device vda6): last unmount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:42:13.775479 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:42:13.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.795490 ignition[1019]: INFO : Ignition 2.24.0 Jan 23 18:42:13.795490 ignition[1019]: INFO : Stage: mount Jan 23 18:42:13.799130 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:13.799130 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:13.799130 ignition[1019]: INFO : mount: mount passed Jan 23 18:42:13.799130 ignition[1019]: INFO : Ignition finished successfully Jan 23 18:42:13.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:13.807190 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:42:13.813165 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:42:13.847182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:42:13.893338 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1030) Jan 23 18:42:13.893372 kernel: BTRFS info (device vda6): first mount of filesystem 65a96faf-6d02-485d-b2fc-84eb49ece660 Jan 23 18:42:13.902351 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:42:13.913427 kernel: BTRFS info (device vda6): turning on async discard Jan 23 18:42:13.913453 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 18:42:13.915560 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:42:13.952755 ignition[1047]: INFO : Ignition 2.24.0 Jan 23 18:42:13.952755 ignition[1047]: INFO : Stage: files Jan 23 18:42:13.957249 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:13.957249 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:13.957249 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:42:13.957249 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:42:13.957249 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:42:13.976667 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:42:13.981666 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:42:13.981666 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:42:13.977860 unknown[1047]: wrote ssh authorized keys file for user: core Jan 23 18:42:13.992758 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:42:13.992758 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 18:42:14.042232 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:42:14.153562 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 18:42:14.153562 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:42:14.163687 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:42:14.212156 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:42:14.212156 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:42:14.212156 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 18:42:14.459366 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:42:15.410215 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 18:42:15.410215 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 18:42:15.421644 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 18:42:15.462003 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:42:15.467591 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 18:42:15.472634 ignition[1047]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 18:42:15.472634 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:42:15.472634 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:42:15.485160 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:42:15.485160 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:42:15.485160 ignition[1047]: INFO : files: files passed Jan 23 18:42:15.485160 ignition[1047]: INFO : Ignition finished successfully Jan 23 18:42:15.509661 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 23 18:42:15.509687 kernel: audit: type=1130 audit(1769193735.486:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:15.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.486206 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:42:15.489453 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:42:15.516057 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:42:15.519827 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:42:15.538678 kernel: audit: type=1130 audit(1769193735.522:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.538735 kernel: audit: type=1131 audit(1769193735.522:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.519978 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:42:15.549368 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 18:42:15.556668 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:42:15.560550 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:42:15.560550 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:42:15.570153 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:42:15.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.576451 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:42:15.589309 kernel: audit: type=1130 audit(1769193735.575:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.586073 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:42:15.680679 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:42:15.680904 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:42:15.699006 kernel: audit: type=1130 audit(1769193735.686:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:15.699030 kernel: audit: type=1131 audit(1769193735.686:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.686886 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:42:15.703407 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:42:15.708577 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:42:15.711024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:42:15.771751 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:42:15.784429 kernel: audit: type=1130 audit(1769193735.771:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.774304 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:42:15.802462 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:42:15.802703 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:42:15.808359 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:42:15.814386 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:42:15.819473 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:42:15.819599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:42:15.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.827471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:42:15.844483 kernel: audit: type=1131 audit(1769193735.824:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:15.836150 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:42:15.845068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:42:15.854217 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:42:15.859842 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:42:15.862187 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:42:15.876294 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 23 18:42:15.883143 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:42:15.887666 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:42:15.896405 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:42:15.925019 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:42:16.075796 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:42:16.240490 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:42:16.247207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:42:16.259180 kernel: audit: type=1131 audit(1769193736.246:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.254967 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:42:16.260251 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:42:16.260494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:42:16.291126 kernel: audit: type=1131 audit(1769193736.272:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.266000 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:42:16.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.266196 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:42:16.291455 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:42:16.291699 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:42:16.298028 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:42:16.300866 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:42:16.304333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:42:16.321905 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:42:16.324513 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:42:16.325982 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:42:16.326154 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:42:16.330185 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:42:16.330351 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:42:16.334074 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 18:42:16.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 23 18:42:16.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.334165 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:42:16.339074 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:42:16.339251 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:42:16.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.347607 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:42:16.347730 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:42:16.354219 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:42:16.356063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:42:16.356196 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:42:16.368548 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:42:16.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.372800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:42:16.372964 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:42:16.379083 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:42:16.379192 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:42:16.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.388376 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:42:16.388578 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:42:16.404414 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:42:16.404548 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 18:42:16.447995 ignition[1104]: INFO : Ignition 2.24.0 Jan 23 18:42:16.447995 ignition[1104]: INFO : Stage: umount Jan 23 18:42:16.451983 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:42:16.451983 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 18:42:16.451983 ignition[1104]: INFO : umount: umount passed Jan 23 18:42:16.451983 ignition[1104]: INFO : Ignition finished successfully Jan 23 18:42:16.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.453434 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:42:16.454176 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:42:16.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.454344 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:42:16.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.456555 systemd[1]: Stopped target network.target - Network. Jan 23 18:42:16.461880 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:42:16.461989 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:42:16.467239 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:42:16.467357 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:42:16.474010 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:42:16.474079 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:42:16.492989 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:42:16.493148 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:42:16.531759 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:42:16.536806 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:42:16.624260 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:42:16.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.624477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:42:16.633101 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:42:16.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.633244 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 23 18:42:16.642714 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:42:16.642874 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:42:16.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.650000 audit: BPF prog-id=6 op=UNLOAD Jan 23 18:42:16.650000 audit: BPF prog-id=9 op=UNLOAD Jan 23 18:42:16.652024 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:42:16.654903 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:42:16.654994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:42:16.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.660416 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:42:16.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.660479 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:42:16.666309 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:42:16.670169 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:42:16.670232 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:42:16.671379 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:42:16.671436 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:42:16.671715 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:42:16.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.671764 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:42:16.672190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:42:16.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.705522 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:42:16.705756 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:42:16.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.707128 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 23 18:42:16.707186 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:42:16.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.713999 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:42:16.714051 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:42:16.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.720465 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:42:16.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.720559 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:42:16.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.726035 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:42:16.726093 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:42:16.733413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:42:16.733483 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:42:16.740814 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:42:16.742163 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:42:16.742224 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:42:16.747200 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:42:16.747303 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:42:16.752881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:42:16.752988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:42:16.798051 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:42:16.798211 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:42:16.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:16.832823 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:42:16.833061 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:42:16.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:16.838141 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:42:16.841103 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:42:16.871729 systemd[1]: Switching root. Jan 23 18:42:16.923970 systemd-journald[318]: Journal stopped Jan 23 18:42:19.168790 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). Jan 23 18:42:19.168887 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:42:19.168916 kernel: SELinux: policy capability open_perms=1 Jan 23 18:42:19.168988 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:42:19.169010 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:42:19.169028 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:42:19.169047 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:42:19.169065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:42:19.169084 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:42:19.169125 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:42:19.169149 systemd[1]: Successfully loaded SELinux policy in 201.864ms. Jan 23 18:42:19.169185 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.059ms. Jan 23 18:42:19.169206 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:42:19.169226 systemd[1]: Detected virtualization kvm. Jan 23 18:42:19.169247 systemd[1]: Detected architecture x86-64. Jan 23 18:42:19.169315 systemd[1]: Detected first boot. Jan 23 18:42:19.169336 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 23 18:42:19.169355 zram_generator::config[1150]: No configuration found. Jan 23 18:42:19.169396 kernel: Guest personality initialized and is inactive Jan 23 18:42:19.169434 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:42:19.169453 kernel: Initialized host personality Jan 23 18:42:19.169471 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:42:19.169489 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:42:19.169508 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:42:19.169530 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:42:19.169556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:42:19.169591 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:42:19.169620 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:42:19.169648 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:42:19.169680 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:42:19.169710 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:42:19.169740 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:42:19.169760 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:42:19.169778 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 23 18:42:19.169797 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:42:19.169817 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:42:19.169836 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:42:19.169855 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:42:19.169880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:42:19.169898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:42:19.169917 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:42:19.169971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:42:19.169993 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:42:19.170013 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:42:19.170036 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:42:19.170055 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:42:19.170074 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:42:19.170093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:42:19.170112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:42:19.170131 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 18:42:19.170150 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:42:19.170195 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:42:19.170215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:42:19.170235 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:42:19.170253 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:42:19.170331 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 18:42:19.170352 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 18:42:19.170372 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:42:19.170395 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 18:42:19.170436 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 18:42:19.170457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:42:19.170476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:42:19.170495 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:42:19.170514 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:42:19.170534 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:42:19.170572 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:42:19.170600 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:19.170625 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 23 18:42:19.170654 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:42:19.170689 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:42:19.170718 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:42:19.170739 systemd[1]: Reached target machines.target - Containers. Jan 23 18:42:19.170761 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:42:19.170780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:42:19.170799 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:42:19.170840 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:42:19.170876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:42:19.170895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:42:19.170915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:42:19.170969 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:42:19.170991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:42:19.171013 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:42:19.171037 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:42:19.171057 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:42:19.171076 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:42:19.171096 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:42:19.171115 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:42:19.171128 kernel: fuse: init (API version 7.41) Jan 23 18:42:19.171141 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:42:19.171157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:42:19.171171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:42:19.171184 kernel: ACPI: bus type drm_connector registered Jan 23 18:42:19.171196 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:42:19.171208 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:42:19.171221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:42:19.171235 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:19.171250 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:42:19.171307 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:42:19.171321 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:42:19.171375 systemd-journald[1236]: Collecting audit messages is enabled. 
Jan 23 18:42:19.171422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:42:19.171441 systemd-journald[1236]: Journal started Jan 23 18:42:19.171462 systemd-journald[1236]: Runtime Journal (/run/log/journal/ea528643fe06434abd3123b8ffbc8b5f) is 6M, max 48.2M, 42.1M free. Jan 23 18:42:18.740000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 18:42:19.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.071000 audit: BPF prog-id=14 op=UNLOAD Jan 23 18:42:19.071000 audit: BPF prog-id=13 op=UNLOAD Jan 23 18:42:19.079000 audit: BPF prog-id=15 op=LOAD Jan 23 18:42:19.082000 audit: BPF prog-id=16 op=LOAD Jan 23 18:42:19.085000 audit: BPF prog-id=17 op=LOAD Jan 23 18:42:19.166000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 18:42:19.166000 audit[1236]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcb2b74e20 a2=4000 a3=0 items=0 ppid=1 pid=1236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:19.166000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 18:42:18.423357 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:42:18.444809 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 18:42:18.445841 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:42:18.446483 systemd[1]: systemd-journald.service: Consumed 1.274s CPU time. Jan 23 18:42:19.182888 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:42:19.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.185227 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:42:19.192082 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:42:19.194887 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:42:19.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.198350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:42:19.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.201774 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 23 18:42:19.202079 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:42:19.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.205614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:42:19.205884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:42:19.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.209092 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:42:19.209400 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:42:19.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.212730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:42:19.213019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:42:19.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.216623 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:42:19.216878 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:42:19.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.220808 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:42:19.221094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 23 18:42:19.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.225239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:42:19.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.229411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:42:19.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.234930 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:42:19.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.239005 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:42:19.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.255839 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:42:19.260341 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 18:42:19.265380 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:42:19.269364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:42:19.272225 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:42:19.272388 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:42:19.276031 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:42:19.280467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:42:19.280736 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:42:19.288698 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:42:19.292833 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:42:19.295781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:42:19.298361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
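Several units above are skipped because of unit conditions, e.g. remount-root.service (ConditionPathIsReadWrite=!/) and systemd-pstore.service (ConditionDirectoryNotEmpty=/sys/fs/pstore). The sketch below re-implements those two checks only to show why they evaluated the way they did on this boot; systemd evaluates conditions natively and more precisely (for instance, it inspects the mount flags rather than probing writability).

```python
"""Approximate the two unit conditions quoted in the log above."""
import os

def condition_path_is_read_write(path: str, negate: bool = False) -> bool:
    # Approximation: writable for the current user and not mounted read-only.
    writable = os.access(path, os.W_OK) and not (os.statvfs(path).f_flag & os.ST_RDONLY)
    return (not writable) if negate else writable

def condition_directory_not_empty(path: str) -> bool:
    try:
        return any(os.scandir(path))
    except FileNotFoundError:
        return False

# ConditionPathIsReadWrite=!/  -> unit runs only when / is NOT writable
print("remount-root.service would run:", condition_path_is_read_write("/", negate=True))
# ConditionDirectoryNotEmpty=/sys/fs/pstore
print("systemd-pstore.service would run:", condition_directory_not_empty("/sys/fs/pstore"))
```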
Jan 23 18:42:19.299996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:42:19.302477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:42:19.306571 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:42:19.310782 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:42:19.317608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:42:19.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.322067 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:42:19.324064 systemd-journald[1236]: Time spent on flushing to /var/log/journal/ea528643fe06434abd3123b8ffbc8b5f is 33.318ms for 1094 entries. Jan 23 18:42:19.324064 systemd-journald[1236]: System Journal (/var/log/journal/ea528643fe06434abd3123b8ffbc8b5f) is 8M, max 163.5M, 155.5M free. Jan 23 18:42:19.378849 systemd-journald[1236]: Received client request to flush runtime journal. Jan 23 18:42:19.378936 kernel: loop1: detected capacity change from 0 to 224512 Jan 23 18:42:19.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.327685 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:42:19.348653 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:42:19.352214 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:42:19.357036 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:42:19.361463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:42:19.380881 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:42:19.386332 kernel: loop2: detected capacity change from 0 to 50784 Jan 23 18:42:19.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.841254 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:42:19.847191 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:42:19.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.852405 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
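The flush above moves the runtime journal into /var/log/journal, and systemd-machine-id-commit persists the transient machine ID; the journal directory name in these lines is that machine ID. A small, purely illustrative cross-check:

```python
"""Confirm that the persistent journal directory matches /etc/machine-id."""
from pathlib import Path

machine_id = Path("/etc/machine-id").read_text().strip()
journal_root = Path("/var/log/journal")
dirs = [p.name for p in journal_root.iterdir() if p.is_dir()] if journal_root.is_dir() else []

print("machine-id:                ", machine_id)
print("persistent journal matches:", machine_id in dirs)
```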
Jan 23 18:42:19.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.858000 audit: BPF prog-id=18 op=LOAD Jan 23 18:42:19.858000 audit: BPF prog-id=19 op=LOAD Jan 23 18:42:19.858000 audit: BPF prog-id=20 op=LOAD Jan 23 18:42:19.860397 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 18:42:19.864000 audit: BPF prog-id=21 op=LOAD Jan 23 18:42:19.867451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:42:19.873516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:42:19.877521 kernel: loop3: detected capacity change from 0 to 111560 Jan 23 18:42:19.893000 audit: BPF prog-id=22 op=LOAD Jan 23 18:42:19.894000 audit: BPF prog-id=23 op=LOAD Jan 23 18:42:19.894000 audit: BPF prog-id=24 op=LOAD Jan 23 18:42:19.895478 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 18:42:19.899000 audit: BPF prog-id=25 op=LOAD Jan 23 18:42:19.899000 audit: BPF prog-id=26 op=LOAD Jan 23 18:42:19.899000 audit: BPF prog-id=27 op=LOAD Jan 23 18:42:19.903482 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:42:19.919569 kernel: loop4: detected capacity change from 0 to 224512 Jan 23 18:42:19.924346 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 23 18:42:19.924376 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 23 18:42:19.931388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:42:19.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.941340 kernel: loop5: detected capacity change from 0 to 50784 Jan 23 18:42:19.958598 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:42:19.962850 kernel: loop6: detected capacity change from 0 to 111560 Jan 23 18:42:19.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:19.974574 (sd-merge)[1296]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 23 18:42:19.980249 (sd-merge)[1296]: Merged extensions into '/usr'. Jan 23 18:42:19.985003 systemd-nsresourced[1293]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 18:42:19.991763 systemd[1]: Reload requested from client PID 1270 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:42:19.991818 systemd[1]: Reloading... Jan 23 18:42:20.244346 zram_generator::config[1339]: No configuration found. Jan 23 18:42:20.308100 systemd-resolved[1290]: Positive Trust Anchors: Jan 23 18:42:20.308120 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:42:20.308126 systemd-resolved[1290]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 18:42:20.308153 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:42:20.313935 systemd-resolved[1290]: Defaulting to hostname 'linux'. Jan 23 18:42:20.315062 systemd-oomd[1289]: No swap; memory pressure usage will be degraded Jan 23 18:42:20.534486 systemd[1]: Reloading finished in 541 ms. Jan 23 18:42:20.561121 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 18:42:20.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.565398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:42:20.566633 kernel: kauditd_printk_skb: 97 callbacks suppressed Jan 23 18:42:20.566683 kernel: audit: type=1130 audit(1769193740.564:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.576425 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 18:42:20.587388 kernel: audit: type=1130 audit(1769193740.575:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.590745 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:42:20.598379 kernel: audit: type=1130 audit(1769193740.589:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.601202 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:42:20.607375 kernel: audit: type=1130 audit(1769193740.600:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:20.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.615913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:42:20.617318 kernel: audit: type=1130 audit(1769193740.610:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:20.636722 systemd[1]: Starting ensure-sysext.service... Jan 23 18:42:20.640189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:42:20.643000 audit: BPF prog-id=8 op=UNLOAD Jan 23 18:42:20.647355 kernel: audit: type=1334 audit(1769193740.643:147): prog-id=8 op=UNLOAD Jan 23 18:42:20.647402 kernel: audit: type=1334 audit(1769193740.643:148): prog-id=7 op=UNLOAD Jan 23 18:42:20.647419 kernel: audit: type=1334 audit(1769193740.643:149): prog-id=28 op=LOAD Jan 23 18:42:20.647434 kernel: audit: type=1334 audit(1769193740.643:150): prog-id=29 op=LOAD Jan 23 18:42:20.643000 audit: BPF prog-id=7 op=UNLOAD Jan 23 18:42:20.643000 audit: BPF prog-id=28 op=LOAD Jan 23 18:42:20.643000 audit: BPF prog-id=29 op=LOAD Jan 23 18:42:20.644882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:42:20.674000 audit: BPF prog-id=30 op=LOAD Jan 23 18:42:20.674000 audit: BPF prog-id=18 op=UNLOAD Jan 23 18:42:20.674000 audit: BPF prog-id=31 op=LOAD Jan 23 18:42:20.674000 audit: BPF prog-id=32 op=LOAD Jan 23 18:42:20.674000 audit: BPF prog-id=19 op=UNLOAD Jan 23 18:42:20.674000 audit: BPF prog-id=20 op=UNLOAD Jan 23 18:42:20.676000 audit: BPF prog-id=33 op=LOAD Jan 23 18:42:20.677000 audit: BPF prog-id=21 op=UNLOAD Jan 23 18:42:20.678352 kernel: audit: type=1334 audit(1769193740.674:151): prog-id=30 op=LOAD Jan 23 18:42:20.678000 audit: BPF prog-id=34 op=LOAD Jan 23 18:42:20.678000 audit: BPF prog-id=25 op=UNLOAD Jan 23 18:42:20.678000 audit: BPF prog-id=35 op=LOAD Jan 23 18:42:20.678000 audit: BPF prog-id=36 op=LOAD Jan 23 18:42:20.678000 audit: BPF prog-id=26 op=UNLOAD Jan 23 18:42:20.679000 audit: BPF prog-id=27 op=UNLOAD Jan 23 18:42:20.681000 audit: BPF prog-id=37 op=LOAD Jan 23 18:42:20.681000 audit: BPF prog-id=15 op=UNLOAD Jan 23 18:42:20.682000 audit: BPF prog-id=38 op=LOAD Jan 23 18:42:20.682000 audit: BPF prog-id=39 op=LOAD Jan 23 18:42:20.682000 audit: BPF prog-id=16 op=UNLOAD Jan 23 18:42:20.682000 audit: BPF prog-id=17 op=UNLOAD Jan 23 18:42:20.683000 audit: BPF prog-id=40 op=LOAD Jan 23 18:42:20.684000 audit: BPF prog-id=22 op=UNLOAD Jan 23 18:42:20.684000 audit: BPF prog-id=41 op=LOAD Jan 23 18:42:20.684000 audit: BPF prog-id=42 op=LOAD Jan 23 18:42:20.684000 audit: BPF prog-id=23 op=UNLOAD Jan 23 18:42:20.684000 audit: BPF prog-id=24 op=UNLOAD Jan 23 18:42:20.692869 systemd[1]: Reload requested from client PID 1376 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:42:20.692910 systemd[1]: Reloading... Jan 23 18:42:20.700021 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:42:20.700112 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
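The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean a later tmpfiles.d line claims a path already claimed by an earlier one, and only the first is honored. The sketch below reproduces that detection in a much cruder form (it only compares the path column and ignores the /etc-over-/usr override and specifier rules that systemd-tmpfiles actually applies).

```python
"""Naive scan for duplicate path entries across /usr/lib/tmpfiles.d."""
from pathlib import Path

seen: dict[str, str] = {}
for conf in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
    for lineno, raw in enumerate(conf.read_text(errors="replace").splitlines(), 1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) < 2:
            continue
        path = fields[1]
        if path in seen:
            print(f"{conf.name}:{lineno}: duplicate line for path {path!r} "
                  f"(first seen at {seen[path]})")
        else:
            seen[path] = f"{conf.name}:{lineno}"
```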
Jan 23 18:42:20.700900 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:42:20.703766 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jan 23 18:42:20.703912 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jan 23 18:42:20.716122 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:42:20.716166 systemd-tmpfiles[1377]: Skipping /boot Jan 23 18:42:20.717796 systemd-udevd[1378]: Using default interface naming scheme 'v257'. Jan 23 18:42:20.750868 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:42:20.750890 systemd-tmpfiles[1377]: Skipping /boot Jan 23 18:42:20.754319 zram_generator::config[1408]: No configuration found. Jan 23 18:42:20.916351 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:42:20.944335 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 18:42:20.952407 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:42:21.011344 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:42:21.012050 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:42:21.140637 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:42:21.142195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 18:42:21.147763 systemd[1]: Reloading finished in 454 ms. Jan 23 18:42:21.166545 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:42:21.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:21.183000 audit: BPF prog-id=43 op=LOAD Jan 23 18:42:21.183000 audit: BPF prog-id=37 op=UNLOAD Jan 23 18:42:21.183000 audit: BPF prog-id=44 op=LOAD Jan 23 18:42:21.183000 audit: BPF prog-id=45 op=LOAD Jan 23 18:42:21.183000 audit: BPF prog-id=38 op=UNLOAD Jan 23 18:42:21.184000 audit: BPF prog-id=39 op=UNLOAD Jan 23 18:42:21.184000 audit: BPF prog-id=46 op=LOAD Jan 23 18:42:21.184000 audit: BPF prog-id=47 op=LOAD Jan 23 18:42:21.184000 audit: BPF prog-id=28 op=UNLOAD Jan 23 18:42:21.184000 audit: BPF prog-id=29 op=UNLOAD Jan 23 18:42:21.186000 audit: BPF prog-id=48 op=LOAD Jan 23 18:42:21.186000 audit: BPF prog-id=30 op=UNLOAD Jan 23 18:42:21.186000 audit: BPF prog-id=49 op=LOAD Jan 23 18:42:21.186000 audit: BPF prog-id=50 op=LOAD Jan 23 18:42:21.186000 audit: BPF prog-id=31 op=UNLOAD Jan 23 18:42:21.186000 audit: BPF prog-id=32 op=UNLOAD Jan 23 18:42:21.198000 audit: BPF prog-id=51 op=LOAD Jan 23 18:42:21.198000 audit: BPF prog-id=33 op=UNLOAD Jan 23 18:42:21.200000 audit: BPF prog-id=52 op=LOAD Jan 23 18:42:21.200000 audit: BPF prog-id=40 op=UNLOAD Jan 23 18:42:21.200000 audit: BPF prog-id=53 op=LOAD Jan 23 18:42:21.200000 audit: BPF prog-id=54 op=LOAD Jan 23 18:42:21.200000 audit: BPF prog-id=41 op=UNLOAD Jan 23 18:42:21.200000 audit: BPF prog-id=42 op=UNLOAD Jan 23 18:42:21.201000 audit: BPF prog-id=55 op=LOAD Jan 23 18:42:21.201000 audit: BPF prog-id=34 op=UNLOAD Jan 23 18:42:21.201000 audit: BPF prog-id=56 op=LOAD Jan 23 18:42:21.201000 audit: BPF prog-id=57 op=LOAD Jan 23 18:42:21.201000 audit: BPF prog-id=35 op=UNLOAD Jan 23 18:42:21.201000 audit: BPF prog-id=36 op=UNLOAD Jan 23 18:42:21.202593 kernel: kvm_amd: TSC scaling supported Jan 23 18:42:21.202624 kernel: kvm_amd: Nested Virtualization enabled Jan 23 18:42:21.202639 kernel: kvm_amd: Nested Paging enabled Jan 23 18:42:21.203812 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 18:42:21.205349 kernel: kvm_amd: PMU virtualization is disabled Jan 23 18:42:21.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.207233 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:42:21.264397 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:42:21.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.265716 systemd[1]: Finished ensure-sysext.service. Jan 23 18:42:21.293358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:21.295142 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:42:21.300080 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:42:21.304396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:42:21.311419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:42:21.317579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:42:21.322779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
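The kvm_amd lines above advertise nested virtualization, nested paging (NPT) and related SVM features on this KVM guest host. Those capabilities are usually also visible as kvm_amd module parameters; the parameter names used below (nested, npt, avic) are an assumption based on common kernels, not something stated in the log, so treat this as a best-effort probe.

```python
"""Best-effort read-back of kvm_amd feature parameters mentioned in the log."""
from pathlib import Path

PARAM_DIR = Path("/sys/module/kvm_amd/parameters")
for param in ("nested", "npt", "avic"):  # assumed parameter names, may vary by kernel
    p = PARAM_DIR / param
    value = p.read_text().strip() if p.exists() else "<not present>"
    print(f"kvm_amd.{param} = {value}")
```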
Jan 23 18:42:21.332494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:42:21.336479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:42:21.336621 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 18:42:21.341471 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:42:21.347871 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:42:21.352011 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:42:21.353630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:42:21.359000 audit: BPF prog-id=58 op=LOAD Jan 23 18:42:21.361041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:42:21.363000 audit: BPF prog-id=59 op=LOAD Jan 23 18:42:21.364810 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:42:21.373623 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:42:21.379529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:42:21.382680 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:42:21.386366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:42:21.387306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:42:21.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.391183 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:42:21.392448 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:42:21.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.397865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:42:21.398311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:42:21.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:21.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.401222 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:42:21.401646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:42:21.407683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:42:21.407806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:42:21.419231 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:42:21.419000 audit[1517]: SYSTEM_BOOT pid=1517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.436233 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:42:21.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.440640 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:42:21.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:21.452000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 18:42:21.452000 audit[1540]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe11fb6de0 a2=420 a3=0 items=0 ppid=1493 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:21.452000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:42:21.453697 augenrules[1540]: No rules Jan 23 18:42:21.454875 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:42:21.455335 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:42:21.460639 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
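The PROCTITLE audit record above encodes the audited command line as hex with NUL separators. Decoding it shows that augenrules invoked /sbin/auditctl -R /etc/audit/audit.rules, which then reported "No rules". A minimal decoder, using the hex string exactly as it appears in the log:

```python
"""Decode the PROCTITLE field from the audit record above."""
PROCTITLE = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)

argv = bytes.fromhex(PROCTITLE).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> /sbin/auditctl -R /etc/audit/audit.rules
```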
Jan 23 18:42:21.464352 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:42:21.511073 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:42:21.515709 systemd-networkd[1511]: lo: Link UP Jan 23 18:42:21.515736 systemd-networkd[1511]: lo: Gained carrier Jan 23 18:42:21.519084 systemd-networkd[1511]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:42:21.519094 systemd-networkd[1511]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:42:21.520403 systemd-networkd[1511]: eth0: Link UP Jan 23 18:42:21.520836 systemd-networkd[1511]: eth0: Gained carrier Jan 23 18:42:21.520864 systemd-networkd[1511]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 18:42:21.537881 systemd-networkd[1511]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 18:42:21.548819 systemd-timesyncd[1512]: Network configuration changed, trying to establish connection. Jan 23 18:42:21.550602 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 18:42:21.550658 systemd-timesyncd[1512]: Initial clock synchronization to Fri 2026-01-23 18:42:21.437982 UTC. Jan 23 18:42:21.655394 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:42:21.660112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:42:21.666927 systemd[1]: Reached target network.target - Network. Jan 23 18:42:21.669682 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:42:21.675024 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:42:21.681745 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:42:21.710794 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:42:21.917498 ldconfig[1505]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:42:21.923508 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:42:21.929487 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:42:21.965042 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:42:21.968581 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:42:21.971623 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:42:21.974903 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:42:21.978364 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:42:21.981705 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:42:21.984686 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:42:21.988034 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 18:42:21.991473 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
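systemd-networkd above acquires 10.0.0.138/16 with gateway 10.0.0.1 via DHCPv4, and timesyncd then reaches the same gateway as an NTP server. The sketch below only expands that lease into its derived addressing with the standard library, to make the subnet arithmetic explicit; it is not something networkd itself emits.

```python
"""Expand the DHCPv4 lease reported in the log above."""
import ipaddress

iface = ipaddress.ip_interface("10.0.0.138/16")
print("address:   ", iface.ip)
print("network:   ", iface.network)                   # 10.0.0.0/16
print("netmask:   ", iface.network.netmask)           # 255.255.0.0
print("broadcast: ", iface.network.broadcast_address)
print("gateway in same subnet:", ipaddress.ip_address("10.0.0.1") in iface.network)
```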
Jan 23 18:42:21.994391 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:42:21.997645 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:42:21.997713 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:42:22.000188 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:42:22.004573 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:42:22.009480 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:42:22.015001 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:42:22.018342 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:42:22.021396 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:42:22.027141 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:42:22.030548 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:42:22.034502 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:42:22.038124 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:42:22.040547 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:42:22.043093 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:42:22.043142 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:42:22.044696 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:42:22.048576 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:42:22.054457 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:42:22.058711 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:42:22.063152 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:42:22.065665 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:42:22.067046 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:42:22.067697 jq[1562]: false Jan 23 18:42:22.070887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:42:22.073194 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:42:22.084773 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:42:22.092953 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing passwd entry cache Jan 23 18:42:22.091491 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:42:22.091199 oslogin_cache_refresh[1564]: Refreshing passwd entry cache Jan 23 18:42:22.098365 extend-filesystems[1563]: Found /dev/vda6 Jan 23 18:42:22.100626 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:42:22.105911 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:42:22.106696 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 23 18:42:22.107810 extend-filesystems[1563]: Found /dev/vda9 Jan 23 18:42:22.109044 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:42:22.115308 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting users, quitting Jan 23 18:42:22.115308 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:42:22.115308 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Refreshing group entry cache Jan 23 18:42:22.113766 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:42:22.113511 oslogin_cache_refresh[1564]: Failure getting users, quitting Jan 23 18:42:22.113534 oslogin_cache_refresh[1564]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:42:22.113586 oslogin_cache_refresh[1564]: Refreshing group entry cache Jan 23 18:42:22.117169 extend-filesystems[1563]: Checking size of /dev/vda9 Jan 23 18:42:22.126644 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:42:22.131067 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:42:22.131755 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:42:22.132624 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:42:22.132894 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Failure getting groups, quitting Jan 23 18:42:22.132894 google_oslogin_nss_cache[1564]: oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:42:22.132879 oslogin_cache_refresh[1564]: Failure getting groups, quitting Jan 23 18:42:22.132895 oslogin_cache_refresh[1564]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:42:22.133078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:42:22.136243 jq[1584]: true Jan 23 18:42:22.137933 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:42:22.138368 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:42:22.141901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:42:22.142242 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:42:22.146854 extend-filesystems[1563]: Resized partition /dev/vda9 Jan 23 18:42:22.154118 extend-filesystems[1599]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:42:22.157554 update_engine[1579]: I20260123 18:42:22.154191 1579 main.cc:92] Flatcar Update Engine starting Jan 23 18:42:22.162325 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 23 18:42:22.172351 jq[1597]: true Jan 23 18:42:22.199364 tar[1594]: linux-amd64/LICENSE Jan 23 18:42:22.201900 tar[1594]: linux-amd64/helm Jan 23 18:42:22.206594 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 23 18:42:22.224248 extend-filesystems[1599]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 18:42:22.224248 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 18:42:22.224248 extend-filesystems[1599]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Jan 23 18:42:22.233513 extend-filesystems[1563]: Resized filesystem in /dev/vda9 Jan 23 18:42:22.227953 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:42:22.235743 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:42:22.249369 dbus-daemon[1560]: [system] SELinux support is enabled Jan 23 18:42:22.253856 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:42:22.262692 update_engine[1579]: I20260123 18:42:22.262614 1579 update_check_scheduler.cc:74] Next update check in 6m21s Jan 23 18:42:22.263402 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:42:22.263453 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:42:22.266948 systemd-logind[1574]: New seat seat0. Jan 23 18:42:22.269313 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:42:22.272794 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:42:22.276104 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:42:22.276160 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:42:22.279912 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:42:22.279956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:42:22.284435 bash[1629]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:42:22.286120 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:42:22.290972 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:42:22.301700 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:42:22.342791 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:42:22.397295 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:42:22.420557 containerd[1598]: time="2026-01-23T18:42:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:42:22.422051 containerd[1598]: time="2026-01-23T18:42:22.421928612Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 18:42:22.425891 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:42:22.432432 systemd[1]: Starting issuegen.service - Generate /run/issue... 
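The resize2fs output above grows /dev/vda9 online from 456704 to 1784827 blocks of 4 KiB. Converting the block counts makes the change easier to read (roughly 1.7 GiB to 6.8 GiB); the numbers below are taken straight from the log.

```python
"""Convert the ext4 block counts from the resize2fs lines into GiB."""
BLOCK_SIZE = 4096          # 4 KiB blocks, per the kernel EXT4-fs message
OLD_BLOCKS, NEW_BLOCKS = 456_704, 1_784_827

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB, "
      f"grown by {gib(NEW_BLOCKS) - gib(OLD_BLOCKS):.2f} GiB")
```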
Jan 23 18:42:22.433328 containerd[1598]: time="2026-01-23T18:42:22.433206870Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.192µs" Jan 23 18:42:22.433328 containerd[1598]: time="2026-01-23T18:42:22.433290198Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:42:22.433385 containerd[1598]: time="2026-01-23T18:42:22.433332598Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:42:22.433385 containerd[1598]: time="2026-01-23T18:42:22.433345660Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:42:22.433563 containerd[1598]: time="2026-01-23T18:42:22.433503860Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:42:22.433563 containerd[1598]: time="2026-01-23T18:42:22.433543256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:42:22.433665 containerd[1598]: time="2026-01-23T18:42:22.433614687Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:42:22.433665 containerd[1598]: time="2026-01-23T18:42:22.433650182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:42:22.433949 containerd[1598]: time="2026-01-23T18:42:22.433867187Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:42:22.433949 containerd[1598]: time="2026-01-23T18:42:22.433942477Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434002 containerd[1598]: time="2026-01-23T18:42:22.433957956Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434002 containerd[1598]: time="2026-01-23T18:42:22.433969109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434235 containerd[1598]: time="2026-01-23T18:42:22.434153493Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434235 containerd[1598]: time="2026-01-23T18:42:22.434205443Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434433 containerd[1598]: time="2026-01-23T18:42:22.434378297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434729 containerd[1598]: time="2026-01-23T18:42:22.434658942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434761 containerd[1598]: time="2026-01-23T18:42:22.434733804Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 23 18:42:22.434761 containerd[1598]: time="2026-01-23T18:42:22.434746796Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:42:22.434810 containerd[1598]: time="2026-01-23T18:42:22.434774234Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:42:22.435065 containerd[1598]: time="2026-01-23T18:42:22.435012002Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:42:22.435308 containerd[1598]: time="2026-01-23T18:42:22.435105864Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:42:22.440166 containerd[1598]: time="2026-01-23T18:42:22.440107512Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:42:22.440166 containerd[1598]: time="2026-01-23T18:42:22.440161125Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:42:22.440318 containerd[1598]: time="2026-01-23T18:42:22.440234793Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 18:42:22.440318 containerd[1598]: time="2026-01-23T18:42:22.440313426Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:42:22.440367 containerd[1598]: time="2026-01-23T18:42:22.440329413Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:42:22.440367 containerd[1598]: time="2026-01-23T18:42:22.440341231Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:42:22.440367 containerd[1598]: time="2026-01-23T18:42:22.440351507Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:42:22.440367 containerd[1598]: time="2026-01-23T18:42:22.440360561Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:42:22.440443 containerd[1598]: time="2026-01-23T18:42:22.440370986Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:42:22.440443 containerd[1598]: time="2026-01-23T18:42:22.440381323Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:42:22.440443 containerd[1598]: time="2026-01-23T18:42:22.440391002Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:42:22.440443 containerd[1598]: time="2026-01-23T18:42:22.440400115Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:42:22.440443 containerd[1598]: time="2026-01-23T18:42:22.440413188Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:42:22.440443 containerd[1598]: time="2026-01-23T18:42:22.440423633Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:42:22.440547 containerd[1598]: time="2026-01-23T18:42:22.440533543Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:42:22.440593 
containerd[1598]: time="2026-01-23T18:42:22.440564503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:42:22.440616 containerd[1598]: time="2026-01-23T18:42:22.440596587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:42:22.440616 containerd[1598]: time="2026-01-23T18:42:22.440612614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:42:22.440668 containerd[1598]: time="2026-01-23T18:42:22.440622144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:42:22.440668 containerd[1598]: time="2026-01-23T18:42:22.440630730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:42:22.440668 containerd[1598]: time="2026-01-23T18:42:22.440646647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:42:22.440668 containerd[1598]: time="2026-01-23T18:42:22.440656955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:42:22.440668 containerd[1598]: time="2026-01-23T18:42:22.440665957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:42:22.440750 containerd[1598]: time="2026-01-23T18:42:22.440674662Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:42:22.440750 containerd[1598]: time="2026-01-23T18:42:22.440683835Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:42:22.440750 containerd[1598]: time="2026-01-23T18:42:22.440702478Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:42:22.440750 containerd[1598]: time="2026-01-23T18:42:22.440737139Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:42:22.440750 containerd[1598]: time="2026-01-23T18:42:22.440747634Z" level=info msg="Start snapshots syncer" Jan 23 18:42:22.440830 containerd[1598]: time="2026-01-23T18:42:22.440764337Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:42:22.441074 containerd[1598]: time="2026-01-23T18:42:22.440998373Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:42:22.441074 containerd[1598]: time="2026-01-23T18:42:22.441064133Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:42:22.441227 containerd[1598]: time="2026-01-23T18:42:22.441103817Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:42:22.441227 containerd[1598]: time="2026-01-23T18:42:22.441208266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:42:22.441227 containerd[1598]: time="2026-01-23T18:42:22.441226193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441236310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441303432Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441318862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441328751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441338262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441347195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 
18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441357134Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441384950Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441395794Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441403434Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441412199Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:42:22.441411 containerd[1598]: time="2026-01-23T18:42:22.441419730Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441429917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441439715Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441456836Z" level=info msg="runtime interface created" Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441462677Z" level=info msg="created NRI interface" Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441489081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441499336Z" level=info msg="Connect containerd service" Jan 23 18:42:22.441697 containerd[1598]: time="2026-01-23T18:42:22.441514577Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:42:22.442234 containerd[1598]: time="2026-01-23T18:42:22.442163722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:42:22.453063 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:42:22.453448 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:42:22.459967 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:42:22.487150 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:42:22.493515 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:42:22.499618 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:42:22.502809 systemd[1]: Reached target getty.target - Login Prompts. 
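containerd above logs its effective CRI configuration (note SystemdCgroup=true and the CNI conf dir /etc/cni/net.d, which explains the later warning that no CNI network config was found yet) after migrating the on-disk file named earlier in its output, /usr/share/containerd/config.toml. The sketch below only peeks at that file with the standard library; it assumes Python 3.11+ for tomllib and makes no assumption about the table layout, which differs between containerd config versions.

```python
"""Inspect the containerd config file referenced in the log above."""
import tomllib  # Python 3.11+; assumption, older interpreters need a TOML package
from pathlib import Path

CONFIG = Path("/usr/share/containerd/config.toml")  # path taken from the containerd log
with CONFIG.open("rb") as f:
    cfg = tomllib.load(f)

print("config version:    ", cfg.get("version"))
# Only list top-level sections rather than assuming a particular schema.
print("top-level sections:", sorted(cfg))
```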
Jan 23 18:42:22.550229 containerd[1598]: time="2026-01-23T18:42:22.550144381Z" level=info msg="Start subscribing containerd event" Jan 23 18:42:22.550229 containerd[1598]: time="2026-01-23T18:42:22.550203365Z" level=info msg="Start recovering state" Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550327203Z" level=info msg="Start event monitor" Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550340543Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550349656Z" level=info msg="Start streaming server" Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550358252Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550365465Z" level=info msg="runtime interface starting up..." Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550372199Z" level=info msg="starting plugins..." Jan 23 18:42:22.550431 containerd[1598]: time="2026-01-23T18:42:22.550387858Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:42:22.550853 containerd[1598]: time="2026-01-23T18:42:22.550818615Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:42:22.550955 containerd[1598]: time="2026-01-23T18:42:22.550901078Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:42:22.551634 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:42:22.554535 containerd[1598]: time="2026-01-23T18:42:22.554444209Z" level=info msg="containerd successfully booted in 0.134748s" Jan 23 18:42:22.576729 tar[1594]: linux-amd64/README.md Jan 23 18:42:22.610117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:42:23.607013 systemd-networkd[1511]: eth0: Gained IPv6LL Jan 23 18:42:23.646551 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:42:23.665034 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:42:23.687845 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 18:42:23.696739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:42:23.707238 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:42:23.864506 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:42:23.944585 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 18:42:23.955735 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 18:42:23.986681 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:42:26.080729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:42:26.086140 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:42:26.087403 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:42:26.090369 systemd[1]: Startup finished in 5.176s (kernel) + 8.382s (initrd) + 9.017s (userspace) = 22.576s. 
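containerd reports serving on /run/containerd/containerd.sock (gRPC) and the matching .ttrpc socket before systemd marks the unit started, and userspace boot completes in about 22.6s. A minimal liveness sketch that only checks the gRPC socket accepts a connection, without speaking the containerd API:

```python
# Sketch: verify the containerd socket named in the "serving..." log lines
# accepts connections. Connectivity only; no containerd API calls are made.
import socket
import sys

SOCKET_PATH = "/run/containerd/containerd.sock"

def socket_is_live(path: str, timeout: float = 2.0) -> bool:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError as exc:
        print(f"connect failed: {exc}", file=sys.stderr)
        return False
    finally:
        s.close()

if __name__ == "__main__":
    sys.exit(0 if socket_is_live(SOCKET_PATH) else 1)
```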
Jan 23 18:42:27.624965 kubelet[1701]: E0123 18:42:27.624591 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:42:27.630031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:42:27.630494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:42:27.631498 systemd[1]: kubelet.service: Consumed 2.534s CPU time, 265.7M memory peak. Jan 23 18:42:31.623669 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:42:31.625571 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:55660.service - OpenSSH per-connection server daemon (10.0.0.1:55660). Jan 23 18:42:31.971186 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 55660 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:31.975567 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:31.992683 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:42:31.994516 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:42:32.001856 systemd-logind[1574]: New session 1 of user core. Jan 23 18:42:32.043627 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:42:32.049245 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:42:32.084149 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:32.088493 systemd-logind[1574]: New session 2 of user core. Jan 23 18:42:32.362856 systemd[1720]: Queued start job for default target default.target. Jan 23 18:42:32.393740 systemd[1720]: Created slice app.slice - User Application Slice. Jan 23 18:42:32.393853 systemd[1720]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 18:42:32.393890 systemd[1720]: Reached target paths.target - Paths. Jan 23 18:42:32.395510 systemd[1720]: Reached target timers.target - Timers. Jan 23 18:42:32.408509 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:42:32.416545 systemd[1720]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 23 18:42:32.499777 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:42:32.499922 systemd[1720]: Reached target sockets.target - Sockets. Jan 23 18:42:32.502383 systemd[1720]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 18:42:32.502784 systemd[1720]: Reached target basic.target - Basic System. Jan 23 18:42:32.509021 systemd[1720]: Reached target default.target - Main User Target. Jan 23 18:42:32.509134 systemd[1720]: Startup finished in 408ms. Jan 23 18:42:32.509934 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:42:32.550763 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:42:32.583561 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:37698.service - OpenSSH per-connection server daemon (10.0.0.1:37698). 
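The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist; on a kubeadm-provisioned node that file is only written by `kubeadm init` or `kubeadm join`, so this failure (and the restart loop seen later in the log) is expected until provisioning runs. A small pre-flight sketch that reports the same condition without starting the unit:

```python
# Sketch: report whether the kubelet config file the unit keeps failing on
# exists. Path taken from the error message above; kubeadm normally writes
# this file, so its absence before init/join is expected rather than fatal.
import os

CONFIG = "/var/lib/kubelet/config.yaml"

if os.path.isfile(CONFIG):
    print(f"{CONFIG} present ({os.path.getsize(CONFIG)} bytes); kubelet can load it")
else:
    print(f"{CONFIG} missing; kubelet.service will keep exiting with status 1 "
          "until kubeadm (or other provisioning) writes it")
```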
Jan 23 18:42:32.669167 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 37698 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:32.671240 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:32.678118 systemd-logind[1574]: New session 3 of user core. Jan 23 18:42:32.688511 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:42:32.707491 sshd[1738]: Connection closed by 10.0.0.1 port 37698 Jan 23 18:42:32.707966 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:32.728554 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:37698.service: Deactivated successfully. Jan 23 18:42:32.733390 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:42:32.734441 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:42:32.737697 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:37710.service - OpenSSH per-connection server daemon (10.0.0.1:37710). Jan 23 18:42:32.738412 systemd-logind[1574]: Removed session 3. Jan 23 18:42:32.814619 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 37710 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:32.816545 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:32.822880 systemd-logind[1574]: New session 4 of user core. Jan 23 18:42:32.835130 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:42:32.853064 sshd[1748]: Connection closed by 10.0.0.1 port 37710 Jan 23 18:42:32.853625 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:32.865110 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:37710.service: Deactivated successfully. Jan 23 18:42:32.867166 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:42:32.868357 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:42:32.871796 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:37726.service - OpenSSH per-connection server daemon (10.0.0.1:37726). Jan 23 18:42:32.872488 systemd-logind[1574]: Removed session 4. Jan 23 18:42:32.939353 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 37726 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:32.941751 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:32.948683 systemd-logind[1574]: New session 5 of user core. Jan 23 18:42:32.963567 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:42:32.981312 sshd[1760]: Connection closed by 10.0.0.1 port 37726 Jan 23 18:42:32.981580 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:32.999336 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:37726.service: Deactivated successfully. Jan 23 18:42:33.001498 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:42:33.002555 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:42:33.005897 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:37738.service - OpenSSH per-connection server daemon (10.0.0.1:37738). Jan 23 18:42:33.006708 systemd-logind[1574]: Removed session 5. 
Jan 23 18:42:33.076054 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 37738 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:33.077804 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:33.084060 systemd-logind[1574]: New session 6 of user core. Jan 23 18:42:33.095495 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:42:33.129920 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:42:33.130454 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:42:33.148057 sudo[1771]: pam_unix(sudo:session): session closed for user root Jan 23 18:42:33.150147 sshd[1770]: Connection closed by 10.0.0.1 port 37738 Jan 23 18:42:33.150431 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:33.165526 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:37738.service: Deactivated successfully. Jan 23 18:42:33.167678 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:42:33.168832 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:42:33.172376 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:37748.service - OpenSSH per-connection server daemon (10.0.0.1:37748). Jan 23 18:42:33.173395 systemd-logind[1574]: Removed session 6. Jan 23 18:42:33.254827 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 37748 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:33.257533 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:33.266244 systemd-logind[1574]: New session 7 of user core. Jan 23 18:42:33.319946 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:42:33.422733 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:42:33.423461 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:42:33.431655 sudo[1784]: pam_unix(sudo:session): session closed for user root Jan 23 18:42:33.444099 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:42:33.444639 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:42:33.456833 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:42:33.532000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:42:33.534532 augenrules[1808]: No rules Jan 23 18:42:33.535979 kernel: kauditd_printk_skb: 75 callbacks suppressed Jan 23 18:42:33.536155 kernel: audit: type=1305 audit(1769193753.532:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 18:42:33.537013 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:42:33.537610 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
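The sudo session above removes the SELinux and default audit rule files and restarts audit-rules, after which augenrules reports `No rules`. A quick sketch to confirm the rules directory matches that state:

```python
# Sketch: list what is left under /etc/audit/rules.d after the install steps
# removed 80-selinux.rules and 99-default.rules; an empty result is consistent
# with the "augenrules: No rules" message in the journal.
import pathlib

rules_dir = pathlib.Path("/etc/audit/rules.d")
remaining = sorted(p.name for p in rules_dir.glob("*.rules")) if rules_dir.is_dir() else []

if remaining:
    print("remaining audit rule files:", ", ".join(remaining))
else:
    print(f"no *.rules files left in {rules_dir} - consistent with 'No rules'")
```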
Jan 23 18:42:33.539041 sudo[1783]: pam_unix(sudo:session): session closed for user root Jan 23 18:42:33.532000 audit[1808]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc1e795a0 a2=420 a3=0 items=0 ppid=1789 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:33.550611 kernel: audit: type=1300 audit(1769193753.532:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc1e795a0 a2=420 a3=0 items=0 ppid=1789 pid=1808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:33.532000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:42:33.550869 sshd[1782]: Connection closed by 10.0.0.1 port 37748 Jan 23 18:42:33.551518 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Jan 23 18:42:33.555236 kernel: audit: type=1327 audit(1769193753.532:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 18:42:33.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.563619 kernel: audit: type=1130 audit(1769193753.536:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.563689 kernel: audit: type=1131 audit(1769193753.536:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.537000 audit[1783]: USER_END pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.581095 kernel: audit: type=1106 audit(1769193753.537:228): pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.581147 kernel: audit: type=1104 audit(1769193753.538:229): pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.538000 audit[1783]: CRED_DISP pid=1783 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 18:42:33.590182 kernel: audit: type=1106 audit(1769193753.552:230): pid=1778 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.552000 audit[1778]: USER_END pid=1778 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.552000 audit[1778]: CRED_DISP pid=1778 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.616660 kernel: audit: type=1104 audit(1769193753.552:231): pid=1778 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.692896 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:37748.service: Deactivated successfully. Jan 23 18:42:33.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.138:22-10.0.0.1:37748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.709906 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:42:33.713441 kernel: audit: type=1131 audit(1769193753.704:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.138:22-10.0.0.1:37748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.714600 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:42:33.735537 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:37750.service - OpenSSH per-connection server daemon (10.0.0.1:37750). Jan 23 18:42:33.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:37750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.737808 systemd-logind[1574]: Removed session 7. 
Jan 23 18:42:33.886000 audit[1817]: USER_ACCT pid=1817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.888253 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 37750 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:42:33.888000 audit[1817]: CRED_ACQ pid=1817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.888000 audit[1817]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe84eb6cd0 a2=3 a3=0 items=0 ppid=1 pid=1817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:33.888000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:42:33.890551 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:42:33.898448 systemd-logind[1574]: New session 8 of user core. Jan 23 18:42:33.913598 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:42:33.915000 audit[1817]: USER_START pid=1817 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.918000 audit[1821]: CRED_ACQ pid=1821 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:42:33.945000 audit[1822]: USER_ACCT pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.946891 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:42:33.945000 audit[1822]: CRED_REFR pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:42:33.947609 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:42:33.946000 audit[1822]: USER_START pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:42:35.999225 systemd[1]: Starting docker.service - Docker Application Container Engine... 
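The PROCTITLE fields in these audit records are hex-encoded command lines, with NUL bytes separating argv entries for multi-argument commands; the value logged for the sshd session above decodes to `sshd-session: core [priv]`, and the iptables records further down decode the same way. A short decoding sketch using two values copied from this log:

```python
# Sketch: decode audit PROCTITLE values, which are hex-encoded command lines
# with NUL bytes between argv entries.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

# Values copied from the journal entries in this section.
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"))
# -> /usr/bin/iptables --wait -t nat -N DOCKER
```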
Jan 23 18:42:36.033976 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:42:37.191578 dockerd[1844]: time="2026-01-23T18:42:37.191173410Z" level=info msg="Starting up" Jan 23 18:42:37.193136 dockerd[1844]: time="2026-01-23T18:42:37.192633553Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:42:37.791210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:42:37.935340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:42:37.946837 dockerd[1844]: time="2026-01-23T18:42:37.946628252Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:42:38.091357 dockerd[1844]: time="2026-01-23T18:42:38.090991007Z" level=info msg="Loading containers: start." Jan 23 18:42:38.464836 kernel: Initializing XFRM netlink socket Jan 23 18:42:38.628782 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 23 18:42:38.628899 kernel: audit: type=1325 audit(1769193758.624:242): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.624000 audit[1903]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.643972 kernel: audit: type=1300 audit(1769193758.624:242): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd2cce28a0 a2=0 a3=0 items=0 ppid=1844 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.624000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd2cce28a0 a2=0 a3=0 items=0 ppid=1844 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:42:38.631000 audit[1906]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.654009 kernel: audit: type=1327 audit(1769193758.624:242): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:42:38.654172 kernel: audit: type=1325 audit(1769193758.631:243): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.654243 kernel: audit: type=1300 audit(1769193758.631:243): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe86904030 a2=0 a3=0 items=0 ppid=1844 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.631000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe86904030 a2=0 a3=0 items=0 ppid=1844 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 23 18:42:38.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:42:38.668717 kernel: audit: type=1327 audit(1769193758.631:243): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:42:38.668834 kernel: audit: type=1325 audit(1769193758.635:244): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.635000 audit[1909]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.635000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd21fa2f90 a2=0 a3=0 items=0 ppid=1844 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.698373 kernel: audit: type=1300 audit(1769193758.635:244): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd21fa2f90 a2=0 a3=0 items=0 ppid=1844 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.703703 kernel: audit: type=1327 audit(1769193758.635:244): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:42:38.635000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:42:38.639000 audit[1911]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.739836 kernel: audit: type=1325 audit(1769193758.639:245): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.639000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0ca717a0 a2=0 a3=0 items=0 ppid=1844 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:42:38.643000 audit[1913]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.643000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefe4d2370 a2=0 a3=0 items=0 ppid=1844 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:42:38.647000 audit[1915]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.647000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd313f97d0 a2=0 a3=0 items=0 ppid=1844 pid=1915 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:42:38.650000 audit[1917]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.650000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff32210d80 a2=0 a3=0 items=0 ppid=1844 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.650000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:42:38.852483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:42:38.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:38.656000 audit[1919]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.656000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd26196400 a2=0 a3=0 items=0 ppid=1844 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:42:38.877844 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:42:38.880000 audit[1924]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.880000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff3819e160 a2=0 a3=0 items=0 ppid=1844 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.880000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 18:42:38.884000 audit[1926]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.884000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9e4de350 a2=0 a3=0 items=0 ppid=1844 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:42:38.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:42:38.888000 audit[1928]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.888000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe66a4ed30 a2=0 a3=0 items=0 ppid=1844 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:42:38.893000 audit[1930]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.893000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffeacf221d0 a2=0 a3=0 items=0 ppid=1844 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:42:38.898000 audit[1937]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:38.898000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffeba83130 a2=0 a3=0 items=0 ppid=1844 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:38.898000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:42:39.070000 audit[1969]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.070000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe67e40350 a2=0 a3=0 items=0 ppid=1844 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 18:42:39.125403 kubelet[1921]: E0123 18:42:39.125104 1921 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:42:39.128000 audit[1971]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.128000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffedaec6f00 a2=0 a3=0 items=0 ppid=1844 pid=1971 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 18:42:39.133555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:42:39.132000 audit[1973]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.132000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2618ee20 a2=0 a3=0 items=0 ppid=1844 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.132000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 18:42:39.133844 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:42:39.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:42:39.134675 systemd[1]: kubelet.service: Consumed 960ms CPU time, 111.1M memory peak. Jan 23 18:42:39.136000 audit[1976]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.136000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd75ba2200 a2=0 a3=0 items=0 ppid=1844 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 18:42:39.141000 audit[1978]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.141000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb9c74660 a2=0 a3=0 items=0 ppid=1844 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 18:42:39.146000 audit[1980]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.146000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd99426440 a2=0 a3=0 items=0 ppid=1844 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:42:39.151000 audit[1982]: 
NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.151000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc4fc09300 a2=0 a3=0 items=0 ppid=1844 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:42:39.156000 audit[1984]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.156000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe98c96620 a2=0 a3=0 items=0 ppid=1844 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 18:42:39.161000 audit[1986]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.161000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff2c372530 a2=0 a3=0 items=0 ppid=1844 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 18:42:39.166000 audit[1988]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.166000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd3a3cd8d0 a2=0 a3=0 items=0 ppid=1844 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 18:42:39.169000 audit[1990]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.169000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffeda5687a0 a2=0 a3=0 items=0 ppid=1844 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 18:42:39.173000 audit[1992]: NETFILTER_CFG table=filter:26 
family=10 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.173000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffe67cce30 a2=0 a3=0 items=0 ppid=1844 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 18:42:39.177000 audit[1994]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.177000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffda5afd7f0 a2=0 a3=0 items=0 ppid=1844 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 18:42:39.187000 audit[1999]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.187000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff47996cd0 a2=0 a3=0 items=0 ppid=1844 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:42:39.191000 audit[2001]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.191000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd377047e0 a2=0 a3=0 items=0 ppid=1844 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:42:39.195000 audit[2003]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.195000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdeb1b1040 a2=0 a3=0 items=0 ppid=1844 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.195000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:42:39.198000 audit[2005]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.198000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 
a1=7ffdb9cd6b70 a2=0 a3=0 items=0 ppid=1844 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 18:42:39.202000 audit[2007]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.202000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffee90b7e80 a2=0 a3=0 items=0 ppid=1844 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.202000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 18:42:39.212000 audit[2009]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:42:39.212000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd6741c600 a2=0 a3=0 items=0 ppid=1844 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 18:42:39.243000 audit[2013]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.243000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd5accd660 a2=0 a3=0 items=0 ppid=1844 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 18:42:39.249000 audit[2016]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.249000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff7cdd0db0 a2=0 a3=0 items=0 ppid=1844 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 18:42:39.292000 audit[2024]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.292000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd85c8c1c0 a2=0 a3=0 items=0 ppid=1844 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 18:42:39.341000 audit[2030]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.341000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdcc304800 a2=0 a3=0 items=0 ppid=1844 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 18:42:39.352000 audit[2032]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.352000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe2b0a0900 a2=0 a3=0 items=0 ppid=1844 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 18:42:39.356000 audit[2034]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.356000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc24a687b0 a2=0 a3=0 items=0 ppid=1844 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.356000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 18:42:39.360000 audit[2036]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.360000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc56d46de0 a2=0 a3=0 items=0 ppid=1844 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 18:42:39.365000 audit[2038]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:42:39.365000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe3745da20 a2=0 
a3=0 items=0 ppid=1844 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:42:39.365000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 18:42:39.367440 systemd-networkd[1511]: docker0: Link UP Jan 23 18:42:39.381059 dockerd[1844]: time="2026-01-23T18:42:39.380528723Z" level=info msg="Loading containers: done." Jan 23 18:42:39.440057 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3907765988-merged.mount: Deactivated successfully. Jan 23 18:42:39.444080 dockerd[1844]: time="2026-01-23T18:42:39.443959060Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:42:39.444226 dockerd[1844]: time="2026-01-23T18:42:39.444166951Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:42:39.444494 dockerd[1844]: time="2026-01-23T18:42:39.444425038Z" level=info msg="Initializing buildkit" Jan 23 18:42:39.487023 dockerd[1844]: time="2026-01-23T18:42:39.486906589Z" level=info msg="Completed buildkit initialization" Jan 23 18:42:39.498601 dockerd[1844]: time="2026-01-23T18:42:39.498425382Z" level=info msg="Daemon has completed initialization" Jan 23 18:42:39.498920 dockerd[1844]: time="2026-01-23T18:42:39.498699620Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:42:39.498967 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:42:39.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:42.136700 containerd[1598]: time="2026-01-23T18:42:42.136398083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 18:42:42.746401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740057005.mount: Deactivated successfully. 
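The run of NETFILTER_CFG/PROCTITLE records above is dockerd creating its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) in both iptables and ip6tables before bringing docker0 up and declaring the daemon initialized. A sketch (root required) that re-checks those chains using the same `iptables --wait` invocation the audit trail records:

```python
# Sketch: confirm the chains dockerd created (per the audit records above)
# are present. Requires root; uses the same iptables binary and --wait flag
# seen in the decoded PROCTITLE fields.
import subprocess

CHAINS = {
    "nat": ["DOCKER"],
    "filter": ["DOCKER", "DOCKER-FORWARD", "DOCKER-BRIDGE", "DOCKER-CT",
               "DOCKER-ISOLATION-STAGE-1", "DOCKER-ISOLATION-STAGE-2", "DOCKER-USER"],
}

for table, chains in CHAINS.items():
    for chain in chains:
        result = subprocess.run(
            ["iptables", "--wait", "-t", table, "-S", chain],
            capture_output=True, text=True,
        )
        status = "ok" if result.returncode == 0 else "missing"
        print(f"{table}/{chain}: {status}")
```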
Jan 23 18:42:45.777684 containerd[1598]: time="2026-01-23T18:42:45.777325288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:45.779447 containerd[1598]: time="2026-01-23T18:42:45.778304590Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 23 18:42:45.787986 containerd[1598]: time="2026-01-23T18:42:45.787859525Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:45.796579 containerd[1598]: time="2026-01-23T18:42:45.796426954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:45.801470 containerd[1598]: time="2026-01-23T18:42:45.801378026Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.664914424s" Jan 23 18:42:45.801470 containerd[1598]: time="2026-01-23T18:42:45.801450256Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 18:42:45.804620 containerd[1598]: time="2026-01-23T18:42:45.804462014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 18:42:49.182043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:42:49.209566 containerd[1598]: time="2026-01-23T18:42:49.208853577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:49.216459 containerd[1598]: time="2026-01-23T18:42:49.215288144Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 23 18:42:49.217635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
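systemd is now cycling the kubelet (restart counter 1 earlier, 2 here) because every start exits on the same missing config file. A small parsing sketch that summarizes the loop from journal text piped on stdin, e.g. `journalctl -u kubelet.service | python3 summarize.py`, matching the exact message formats in this log:

```python
# Sketch: summarize the kubelet crash loop from journal text like the lines in
# this boot log (restart-counter announcements plus the run.go config error).
import re
import sys

restart_re = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
error_re = re.compile(r'run\.go:\d+\] "command failed" err="(.*)"')

restarts, errors = [], []
for line in sys.stdin:
    if (m := restart_re.search(line)):
        restarts.append(int(m.group(1)))
    elif (m := error_re.search(line)):
        errors.append(m.group(1))

print(f"restart counter values seen: {restarts or 'none'}")
if errors:
    print(f"last failure: {errors[-1]}")
```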
Jan 23 18:42:49.219466 containerd[1598]: time="2026-01-23T18:42:49.219375990Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:49.237754 containerd[1598]: time="2026-01-23T18:42:49.236716717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:49.246849 containerd[1598]: time="2026-01-23T18:42:49.246540493Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.441978086s" Jan 23 18:42:49.246849 containerd[1598]: time="2026-01-23T18:42:49.246773532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 18:42:49.260813 containerd[1598]: time="2026-01-23T18:42:49.257492241Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 18:42:49.981510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:42:49.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:49.988651 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 23 18:42:49.989256 kernel: audit: type=1130 audit(1769193769.980:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:42:50.019907 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:42:50.421360 kubelet[2153]: E0123 18:42:50.421100 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:42:50.427192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:42:50.427618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:42:50.428540 systemd[1]: kubelet.service: Consumed 1.028s CPU time, 111.3M memory peak. Jan 23 18:42:50.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:42:50.438387 kernel: audit: type=1131 audit(1769193770.428:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 23 18:42:52.910698 containerd[1598]: time="2026-01-23T18:42:52.910521444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:52.912447 containerd[1598]: time="2026-01-23T18:42:52.911976930Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 23 18:42:52.918431 containerd[1598]: time="2026-01-23T18:42:52.918355490Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:52.921805 containerd[1598]: time="2026-01-23T18:42:52.921759895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:52.923118 containerd[1598]: time="2026-01-23T18:42:52.923082697Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.665503369s" Jan 23 18:42:52.923118 containerd[1598]: time="2026-01-23T18:42:52.923119939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 18:42:52.925427 containerd[1598]: time="2026-01-23T18:42:52.925356944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 18:42:55.508583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200445468.mount: Deactivated successfully. 
Jan 23 18:42:56.697376 containerd[1598]: time="2026-01-23T18:42:56.696982269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:56.699911 containerd[1598]: time="2026-01-23T18:42:56.698149911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 23 18:42:56.699911 containerd[1598]: time="2026-01-23T18:42:56.699612587Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:56.702507 containerd[1598]: time="2026-01-23T18:42:56.702431209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:56.703392 containerd[1598]: time="2026-01-23T18:42:56.703201309Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.777776582s" Jan 23 18:42:56.703392 containerd[1598]: time="2026-01-23T18:42:56.703254637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 18:42:56.705254 containerd[1598]: time="2026-01-23T18:42:56.705154654Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 18:42:57.527745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357177538.mount: Deactivated successfully. 
Jan 23 18:42:58.998521 containerd[1598]: time="2026-01-23T18:42:58.998227939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:59.000357 containerd[1598]: time="2026-01-23T18:42:59.000013628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18308383" Jan 23 18:42:59.001778 containerd[1598]: time="2026-01-23T18:42:59.001688302Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:59.005612 containerd[1598]: time="2026-01-23T18:42:59.005523939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:42:59.007114 containerd[1598]: time="2026-01-23T18:42:59.007029350Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.301828917s" Jan 23 18:42:59.007114 containerd[1598]: time="2026-01-23T18:42:59.007086620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 18:42:59.009052 containerd[1598]: time="2026-01-23T18:42:59.009003667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:42:59.399467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209842761.mount: Deactivated successfully. 
Jan 23 18:42:59.407241 containerd[1598]: time="2026-01-23T18:42:59.407123944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:42:59.408109 containerd[1598]: time="2026-01-23T18:42:59.408069237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:42:59.410068 containerd[1598]: time="2026-01-23T18:42:59.409958584Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:42:59.412410 containerd[1598]: time="2026-01-23T18:42:59.412333241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:42:59.412948 containerd[1598]: time="2026-01-23T18:42:59.412906673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 403.860664ms" Jan 23 18:42:59.413060 containerd[1598]: time="2026-01-23T18:42:59.412950947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:42:59.413723 containerd[1598]: time="2026-01-23T18:42:59.413681066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 18:43:00.017169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531157310.mount: Deactivated successfully. Jan 23 18:43:00.647057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:43:00.649067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:00.920800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:00.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:00.928339 kernel: audit: type=1130 audit(1769193780.919:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:00.937686 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:43:00.980884 kubelet[2286]: E0123 18:43:00.980784 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:43:00.985699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:43:00.985946 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 18:43:00.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:43:00.986525 systemd[1]: kubelet.service: Consumed 263ms CPU time, 108.7M memory peak. Jan 23 18:43:00.994304 kernel: audit: type=1131 audit(1769193780.985:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 18:43:01.870995 containerd[1598]: time="2026-01-23T18:43:01.870843652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:01.872376 containerd[1598]: time="2026-01-23T18:43:01.872202113Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 23 18:43:01.873508 containerd[1598]: time="2026-01-23T18:43:01.873439654Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:01.876991 containerd[1598]: time="2026-01-23T18:43:01.876868043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:01.877886 containerd[1598]: time="2026-01-23T18:43:01.877842909Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.464135379s" Jan 23 18:43:01.877886 containerd[1598]: time="2026-01-23T18:43:01.877872097Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 18:43:03.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:03.666541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:03.666763 systemd[1]: kubelet.service: Consumed 263ms CPU time, 108.7M memory peak. Jan 23 18:43:03.680738 kernel: audit: type=1130 audit(1769193783.665:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:03.681761 kernel: audit: type=1131 audit(1769193783.665:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:03.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:03.684752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 18:43:03.719857 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-8.scope)... Jan 23 18:43:03.719894 systemd[1]: Reloading... Jan 23 18:43:03.847748 zram_generator::config[2376]: No configuration found. Jan 23 18:43:04.122039 systemd[1]: Reloading finished in 401 ms. Jan 23 18:43:04.156000 audit: BPF prog-id=63 op=LOAD Jan 23 18:43:04.156000 audit: BPF prog-id=43 op=UNLOAD Jan 23 18:43:04.162623 kernel: audit: type=1334 audit(1769193784.156:291): prog-id=63 op=LOAD Jan 23 18:43:04.162690 kernel: audit: type=1334 audit(1769193784.156:292): prog-id=43 op=UNLOAD Jan 23 18:43:04.162774 kernel: audit: type=1334 audit(1769193784.156:293): prog-id=64 op=LOAD Jan 23 18:43:04.162819 kernel: audit: type=1334 audit(1769193784.156:294): prog-id=65 op=LOAD Jan 23 18:43:04.162873 kernel: audit: type=1334 audit(1769193784.156:295): prog-id=44 op=UNLOAD Jan 23 18:43:04.162924 kernel: audit: type=1334 audit(1769193784.156:296): prog-id=45 op=UNLOAD Jan 23 18:43:04.156000 audit: BPF prog-id=64 op=LOAD Jan 23 18:43:04.156000 audit: BPF prog-id=65 op=LOAD Jan 23 18:43:04.156000 audit: BPF prog-id=44 op=UNLOAD Jan 23 18:43:04.156000 audit: BPF prog-id=45 op=UNLOAD Jan 23 18:43:04.157000 audit: BPF prog-id=66 op=LOAD Jan 23 18:43:04.157000 audit: BPF prog-id=51 op=UNLOAD Jan 23 18:43:04.159000 audit: BPF prog-id=67 op=LOAD Jan 23 18:43:04.159000 audit: BPF prog-id=48 op=UNLOAD Jan 23 18:43:04.159000 audit: BPF prog-id=68 op=LOAD Jan 23 18:43:04.159000 audit: BPF prog-id=69 op=LOAD Jan 23 18:43:04.159000 audit: BPF prog-id=49 op=UNLOAD Jan 23 18:43:04.159000 audit: BPF prog-id=50 op=UNLOAD Jan 23 18:43:04.160000 audit: BPF prog-id=70 op=LOAD Jan 23 18:43:04.160000 audit: BPF prog-id=58 op=UNLOAD Jan 23 18:43:04.161000 audit: BPF prog-id=71 op=LOAD Jan 23 18:43:04.161000 audit: BPF prog-id=55 op=UNLOAD Jan 23 18:43:04.161000 audit: BPF prog-id=72 op=LOAD Jan 23 18:43:04.161000 audit: BPF prog-id=73 op=LOAD Jan 23 18:43:04.161000 audit: BPF prog-id=56 op=UNLOAD Jan 23 18:43:04.161000 audit: BPF prog-id=57 op=UNLOAD Jan 23 18:43:04.161000 audit: BPF prog-id=74 op=LOAD Jan 23 18:43:04.163000 audit: BPF prog-id=52 op=UNLOAD Jan 23 18:43:04.163000 audit: BPF prog-id=75 op=LOAD Jan 23 18:43:04.163000 audit: BPF prog-id=76 op=LOAD Jan 23 18:43:04.163000 audit: BPF prog-id=53 op=UNLOAD Jan 23 18:43:04.163000 audit: BPF prog-id=54 op=UNLOAD Jan 23 18:43:04.166000 audit: BPF prog-id=77 op=LOAD Jan 23 18:43:04.166000 audit: BPF prog-id=60 op=UNLOAD Jan 23 18:43:04.166000 audit: BPF prog-id=78 op=LOAD Jan 23 18:43:04.167000 audit: BPF prog-id=79 op=LOAD Jan 23 18:43:04.167000 audit: BPF prog-id=61 op=UNLOAD Jan 23 18:43:04.167000 audit: BPF prog-id=62 op=UNLOAD Jan 23 18:43:04.167000 audit: BPF prog-id=80 op=LOAD Jan 23 18:43:04.168000 audit: BPF prog-id=81 op=LOAD Jan 23 18:43:04.168000 audit: BPF prog-id=46 op=UNLOAD Jan 23 18:43:04.168000 audit: BPF prog-id=47 op=UNLOAD Jan 23 18:43:04.171000 audit: BPF prog-id=82 op=LOAD Jan 23 18:43:04.172000 audit: BPF prog-id=59 op=UNLOAD Jan 23 18:43:04.219476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:43:04.219657 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:43:04.220330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:04.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 23 18:43:04.220484 systemd[1]: kubelet.service: Consumed 166ms CPU time, 98.4M memory peak. Jan 23 18:43:04.222763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:04.446537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:04.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:04.475382 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:43:04.841543 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:43:04.841543 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:43:04.841543 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:43:04.841543 kubelet[2424]: I0123 18:43:04.841397 2424 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:43:05.424300 kubelet[2424]: I0123 18:43:05.424087 2424 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 18:43:05.424300 kubelet[2424]: I0123 18:43:05.424183 2424 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:43:05.424804 kubelet[2424]: I0123 18:43:05.424755 2424 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 18:43:05.450729 kubelet[2424]: I0123 18:43:05.450673 2424 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:43:05.452353 kubelet[2424]: E0123 18:43:05.452250 2424 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:05.710233 kubelet[2424]: I0123 18:43:05.709175 2424 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:43:05.722196 kubelet[2424]: I0123 18:43:05.722094 2424 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:43:05.726143 kubelet[2424]: I0123 18:43:05.725972 2424 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:43:05.726558 kubelet[2424]: I0123 18:43:05.726116 2424 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:43:05.726887 kubelet[2424]: I0123 18:43:05.726563 2424 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:43:05.726887 kubelet[2424]: I0123 18:43:05.726576 2424 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 18:43:05.726887 kubelet[2424]: I0123 18:43:05.726786 2424 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:43:05.732613 kubelet[2424]: I0123 18:43:05.732539 2424 kubelet.go:446] "Attempting to sync node with API server" Jan 23 18:43:05.732674 kubelet[2424]: I0123 18:43:05.732628 2424 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:43:05.732674 kubelet[2424]: I0123 18:43:05.732672 2424 kubelet.go:352] "Adding apiserver pod source" Jan 23 18:43:05.732819 kubelet[2424]: I0123 18:43:05.732697 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:43:05.739683 kubelet[2424]: W0123 18:43:05.739526 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:05.739683 kubelet[2424]: E0123 18:43:05.739594 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:05.739683 kubelet[2424]: W0123 18:43:05.739602 2424 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:05.739806 kubelet[2424]: E0123 18:43:05.739686 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:05.740044 kubelet[2424]: I0123 18:43:05.739962 2424 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:43:05.740681 kubelet[2424]: I0123 18:43:05.740600 2424 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 18:43:05.740735 kubelet[2424]: W0123 18:43:05.740719 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:43:05.743767 kubelet[2424]: I0123 18:43:05.743706 2424 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:43:05.743877 kubelet[2424]: I0123 18:43:05.743838 2424 server.go:1287] "Started kubelet" Jan 23 18:43:05.744344 kubelet[2424]: I0123 18:43:05.744159 2424 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:43:05.744413 kubelet[2424]: I0123 18:43:05.744334 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:43:05.747439 kubelet[2424]: I0123 18:43:05.746958 2424 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:43:05.747569 kubelet[2424]: I0123 18:43:05.747477 2424 server.go:479] "Adding debug handlers to kubelet server" Jan 23 18:43:05.750328 kubelet[2424]: E0123 18:43:05.750224 2424 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:43:05.806584 kubelet[2424]: E0123 18:43:05.800045 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d705fe2160a6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 18:43:05.743772267 +0000 UTC m=+1.197572583,LastTimestamp:2026-01-23 18:43:05.743772267 +0000 UTC m=+1.197572583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 18:43:05.821381 kubelet[2424]: I0123 18:43:05.819691 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:43:05.821381 kubelet[2424]: I0123 18:43:05.819820 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:43:05.821381 kubelet[2424]: I0123 18:43:05.820009 2424 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:43:05.821381 kubelet[2424]: I0123 18:43:05.820664 2424 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:43:05.821381 kubelet[2424]: I0123 18:43:05.820784 2424 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:43:05.821381 kubelet[2424]: E0123 18:43:05.820911 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:05.823740 kubelet[2424]: E0123 18:43:05.823454 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Jan 23 18:43:05.824171 kubelet[2424]: I0123 18:43:05.824113 2424 factory.go:221] Registration of the systemd container factory successfully Jan 23 18:43:05.825087 kubelet[2424]: W0123 18:43:05.824967 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:05.825087 kubelet[2424]: E0123 18:43:05.825043 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:05.826172 kubelet[2424]: I0123 18:43:05.826099 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:43:05.828186 kubelet[2424]: I0123 18:43:05.828142 2424 factory.go:221] Registration of the containerd container factory successfully Jan 23 18:43:05.830000 audit[2438]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2438 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.830000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff06f5dda0 a2=0 a3=0 items=0 ppid=2424 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:43:05.832000 audit[2442]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.832000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc611df2c0 a2=0 a3=0 items=0 ppid=2424 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:43:05.837000 audit[2444]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.837000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe09c7c6d0 a2=0 a3=0 items=0 ppid=2424 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.837000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:43:05.843000 audit[2446]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.843000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc84633460 a2=0 a3=0 items=0 ppid=2424 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:43:05.854344 kubelet[2424]: I0123 18:43:05.853084 2424 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:43:05.854344 kubelet[2424]: I0123 18:43:05.853105 2424 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:43:05.854344 kubelet[2424]: I0123 18:43:05.853136 2424 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:43:05.854000 audit[2451]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.854000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffe1ea1900 a2=0 a3=0 items=0 ppid=2424 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.854000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 18:43:05.856597 kubelet[2424]: I0123 18:43:05.856555 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 18:43:05.857000 audit[2454]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:05.857000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc5a85d310 a2=0 a3=0 items=0 ppid=2424 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 18:43:05.857000 audit[2452]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.857000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdebe92e20 a2=0 a3=0 items=0 ppid=2424 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:43:05.859069 kubelet[2424]: I0123 18:43:05.858920 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:43:05.859069 kubelet[2424]: I0123 18:43:05.858990 2424 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 18:43:05.859069 kubelet[2424]: I0123 18:43:05.859026 2424 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
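
The container manager NodeConfig logged a few entries above lists the hard eviction thresholds the kubelet will enforce. A small sketch that restates those thresholds and evaluates the memory signal; the values are copied from the log, the helper function is illustrative only:

    # Hard eviction thresholds from the NodeConfig above (signal -> threshold).
    HARD_EVICTION = {
        "memory.available": "100Mi",
        "nodefs.available": "10%",
        "nodefs.inodesFree": "5%",
        "imagefs.available": "15%",
        "imagefs.inodesFree": "5%",
    }

    def memory_pressure(available_bytes: int, threshold_mib: int = 100) -> bool:
        # Eviction fires once memory.available drops below the 100Mi threshold.
        return available_bytes < threshold_mib * 1024 * 1024

    print(memory_pressure(512 * 1024 * 1024))   # False: 512Mi available is above 100Mi
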
Jan 23 18:43:05.859069 kubelet[2424]: I0123 18:43:05.859036 2424 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 18:43:05.859158 kubelet[2424]: E0123 18:43:05.859131 2424 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:43:05.859000 audit[2456]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.859000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9fa8a1b0 a2=0 a3=0 items=0 ppid=2424 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:43:05.859000 audit[2455]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:05.859000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb4eb07f0 a2=0 a3=0 items=0 ppid=2424 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 18:43:05.861000 audit[2458]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:05.861000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde65b82d0 a2=0 a3=0 items=0 ppid=2424 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:43:05.862000 audit[2459]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:05.862000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0c93c760 a2=0 a3=0 items=0 ppid=2424 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:05.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 18:43:05.864000 audit[2460]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:05.864000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8d388300 a2=0 a3=0 items=0 ppid=2424 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:43:05.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 18:43:05.919782 kubelet[2424]: I0123 18:43:05.919401 2424 policy_none.go:49] "None policy: Start" Jan 23 18:43:05.919782 kubelet[2424]: I0123 18:43:05.919534 2424 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:43:05.919782 kubelet[2424]: I0123 18:43:05.919553 2424 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:43:05.929204 kubelet[2424]: E0123 18:43:05.922228 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:05.929204 kubelet[2424]: W0123 18:43:05.923093 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:05.929204 kubelet[2424]: E0123 18:43:05.923319 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:05.963074 kubelet[2424]: E0123 18:43:05.960827 2424 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:43:05.973676 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:43:06.023519 kubelet[2424]: E0123 18:43:06.023151 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 18:43:06.041691 kubelet[2424]: E0123 18:43:06.038549 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Jan 23 18:43:06.048236 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:43:06.113407 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:43:06.116908 kubelet[2424]: I0123 18:43:06.116824 2424 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:43:06.117411 kubelet[2424]: I0123 18:43:06.117311 2424 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:43:06.117480 kubelet[2424]: I0123 18:43:06.117367 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:43:06.118282 kubelet[2424]: I0123 18:43:06.118166 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:43:06.120581 kubelet[2424]: E0123 18:43:06.120466 2424 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:43:06.120705 kubelet[2424]: E0123 18:43:06.120653 2424 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 18:43:06.175919 systemd[1]: Created slice kubepods-burstable-pod121deb0322f69690a1b4e374bf77690a.slice - libcontainer container kubepods-burstable-pod121deb0322f69690a1b4e374bf77690a.slice. Jan 23 18:43:06.206877 kubelet[2424]: E0123 18:43:06.206782 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:06.212670 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 23 18:43:06.215979 kubelet[2424]: E0123 18:43:06.215910 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:06.219550 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 23 18:43:06.219859 kubelet[2424]: I0123 18:43:06.219669 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:06.220655 kubelet[2424]: E0123 18:43:06.220575 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 23 18:43:06.222282 kubelet[2424]: E0123 18:43:06.222196 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:06.224535 kubelet[2424]: I0123 18:43:06.224471 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/121deb0322f69690a1b4e374bf77690a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"121deb0322f69690a1b4e374bf77690a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:06.224535 kubelet[2424]: I0123 18:43:06.224531 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:06.225014 kubelet[2424]: I0123 18:43:06.224564 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:06.225014 kubelet[2424]: I0123 18:43:06.224592 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:06.225014 
kubelet[2424]: I0123 18:43:06.224621 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:43:06.225014 kubelet[2424]: I0123 18:43:06.224645 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/121deb0322f69690a1b4e374bf77690a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"121deb0322f69690a1b4e374bf77690a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:06.225014 kubelet[2424]: I0123 18:43:06.224677 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/121deb0322f69690a1b4e374bf77690a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"121deb0322f69690a1b4e374bf77690a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:06.225216 kubelet[2424]: I0123 18:43:06.224699 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:06.225216 kubelet[2424]: I0123 18:43:06.224716 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:06.427480 kubelet[2424]: I0123 18:43:06.427096 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:06.428868 kubelet[2424]: E0123 18:43:06.428135 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jan 23 18:43:06.441768 kubelet[2424]: E0123 18:43:06.441608 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Jan 23 18:43:06.509190 kubelet[2424]: E0123 18:43:06.508899 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:06.510599 containerd[1598]: time="2026-01-23T18:43:06.510531561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:121deb0322f69690a1b4e374bf77690a,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:06.517244 kubelet[2424]: E0123 18:43:06.517181 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:06.517647 containerd[1598]: time="2026-01-23T18:43:06.517610909Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:06.523873 kubelet[2424]: E0123 18:43:06.523734 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:06.524294 containerd[1598]: time="2026-01-23T18:43:06.524208672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:06.618460 containerd[1598]: time="2026-01-23T18:43:06.618399587Z" level=info msg="connecting to shim 1d4bcd986744e7cfae9ac9428cb6343b8eeecbfe1b4a3a237ccf17d4fc414557" address="unix:///run/containerd/s/071cad94abd0f7357d15f4fdf9922a88196a9f88251a521cce0bf929ced0d53f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:06.623329 containerd[1598]: time="2026-01-23T18:43:06.622366451Z" level=info msg="connecting to shim df78f9a35569c3eef2f156f0521d3fdd9186940c12267ea19cbb87f06f9de54f" address="unix:///run/containerd/s/7fc1958758a2c233e7347e1a2a6d0b7ab6349285db66198f89d52b5699d1ec2b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:06.628980 containerd[1598]: time="2026-01-23T18:43:06.628895611Z" level=info msg="connecting to shim 36db14638a213154349a14c64861716b46a471c25c42121ef285ccbcd32758d0" address="unix:///run/containerd/s/ff2f6c0efc988a8aa626cb8c7a1ae6a859b352d2768f2706476325086387de23" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:06.659896 kubelet[2424]: W0123 18:43:06.659789 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:06.659896 kubelet[2424]: E0123 18:43:06.659885 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:06.663121 systemd[1]: Started cri-containerd-df78f9a35569c3eef2f156f0521d3fdd9186940c12267ea19cbb87f06f9de54f.scope - libcontainer container df78f9a35569c3eef2f156f0521d3fdd9186940c12267ea19cbb87f06f9de54f. Jan 23 18:43:06.669982 systemd[1]: Started cri-containerd-1d4bcd986744e7cfae9ac9428cb6343b8eeecbfe1b4a3a237ccf17d4fc414557.scope - libcontainer container 1d4bcd986744e7cfae9ac9428cb6343b8eeecbfe1b4a3a237ccf17d4fc414557. Jan 23 18:43:06.700512 systemd[1]: Started cri-containerd-36db14638a213154349a14c64861716b46a471c25c42121ef285ccbcd32758d0.scope - libcontainer container 36db14638a213154349a14c64861716b46a471c25c42121ef285ccbcd32758d0. 
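
The client-go reflectors keep failing with "dial tcp 10.0.0.138:6443: connect: connection refused" because the kube-apiserver static pod whose sandbox is being created here is not serving yet. A minimal probe sketch that reproduces the same condition, reusing the endpoint taken from the log:

    import socket

    # Endpoint taken from the reflector errors in the log above.
    APISERVER = ("10.0.0.138", 6443)

    try:
        with socket.create_connection(APISERVER, timeout=2):
            print("apiserver port is accepting connections")
    except OSError as exc:  # e.g. ConnectionRefusedError while the static pod is still starting
        print(f"apiserver not reachable yet: {exc}")
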
Jan 23 18:43:06.704000 audit: BPF prog-id=83 op=LOAD Jan 23 18:43:06.709787 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 23 18:43:06.709871 kernel: audit: type=1334 audit(1769193786.704:345): prog-id=83 op=LOAD Jan 23 18:43:06.709000 audit: BPF prog-id=84 op=LOAD Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.724323 kernel: audit: type=1334 audit(1769193786.709:346): prog-id=84 op=LOAD Jan 23 18:43:06.724389 kernel: audit: type=1300 audit(1769193786.709:346): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.724435 kernel: audit: type=1327 audit(1769193786.709:346): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.737342 kernel: audit: type=1334 audit(1769193786.709:347): prog-id=84 op=UNLOAD Jan 23 18:43:06.709000 audit: BPF prog-id=84 op=UNLOAD Jan 23 18:43:06.747482 kernel: audit: type=1300 audit(1769193786.709:347): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.750395 kubelet[2424]: W0123 18:43:06.749871 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:06.750395 kubelet[2424]: E0123 18:43:06.749926 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:06.758824 kernel: audit: 
type=1327 audit(1769193786.709:347): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.758896 kernel: audit: type=1334 audit(1769193786.709:348): prog-id=85 op=LOAD Jan 23 18:43:06.709000 audit: BPF prog-id=85 op=LOAD Jan 23 18:43:06.761356 kernel: audit: type=1300 audit(1769193786.709:348): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.782496 kernel: audit: type=1327 audit(1769193786.709:348): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.709000 audit: BPF prog-id=86 op=LOAD Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.709000 audit: BPF prog-id=86 op=UNLOAD Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.709000 audit: BPF prog-id=85 op=UNLOAD Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.709000 audit: BPF prog-id=87 op=LOAD Jan 23 18:43:06.709000 audit[2510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2479 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346263643938363734346537636661653961633934323863623633 Jan 23 18:43:06.710000 audit: BPF prog-id=88 op=LOAD Jan 23 18:43:06.711000 audit: BPF prog-id=89 op=LOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.711000 audit: BPF prog-id=89 op=UNLOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.711000 audit: BPF prog-id=90 op=LOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.711000 audit: BPF prog-id=91 op=LOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.711000 audit: BPF prog-id=91 op=UNLOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.711000 audit: BPF prog-id=90 op=UNLOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.711000 audit: BPF prog-id=92 op=LOAD Jan 23 18:43:06.711000 audit[2508]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2477 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466373866396133353536396333656566326631353666303532316433 Jan 23 18:43:06.721000 audit: BPF prog-id=93 op=LOAD Jan 23 18:43:06.722000 audit: BPF prog-id=94 op=LOAD Jan 23 18:43:06.722000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.722000 audit: BPF prog-id=94 op=UNLOAD Jan 23 18:43:06.722000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.722000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.722000 audit: BPF prog-id=95 op=LOAD Jan 23 18:43:06.722000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.781000 audit: BPF prog-id=96 op=LOAD Jan 23 18:43:06.781000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.781000 audit: BPF prog-id=96 op=UNLOAD Jan 23 18:43:06.781000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.781000 audit: BPF prog-id=95 op=UNLOAD Jan 23 18:43:06.781000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.781000 audit: BPF prog-id=97 op=LOAD Jan 23 18:43:06.781000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2505 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.781000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336646231343633386132313331353433343961313463363438363137 Jan 23 18:43:06.791145 containerd[1598]: time="2026-01-23T18:43:06.791087477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d4bcd986744e7cfae9ac9428cb6343b8eeecbfe1b4a3a237ccf17d4fc414557\"" Jan 23 18:43:06.792326 kubelet[2424]: E0123 18:43:06.792238 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:06.801905 containerd[1598]: time="2026-01-23T18:43:06.801857994Z" level=info msg="CreateContainer within sandbox \"1d4bcd986744e7cfae9ac9428cb6343b8eeecbfe1b4a3a237ccf17d4fc414557\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:43:06.802621 containerd[1598]: time="2026-01-23T18:43:06.802541670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:121deb0322f69690a1b4e374bf77690a,Namespace:kube-system,Attempt:0,} returns sandbox id \"df78f9a35569c3eef2f156f0521d3fdd9186940c12267ea19cbb87f06f9de54f\"" Jan 23 18:43:06.803973 kubelet[2424]: E0123 18:43:06.803924 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:06.808861 containerd[1598]: time="2026-01-23T18:43:06.808344260Z" level=info msg="CreateContainer within sandbox \"df78f9a35569c3eef2f156f0521d3fdd9186940c12267ea19cbb87f06f9de54f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:43:06.818046 containerd[1598]: time="2026-01-23T18:43:06.817973515Z" level=info msg="Container dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:06.821156 containerd[1598]: time="2026-01-23T18:43:06.821113502Z" level=info msg="Container 840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:06.827478 containerd[1598]: time="2026-01-23T18:43:06.827424445Z" level=info msg="CreateContainer within sandbox \"1d4bcd986744e7cfae9ac9428cb6343b8eeecbfe1b4a3a237ccf17d4fc414557\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319\"" Jan 23 18:43:06.828388 containerd[1598]: time="2026-01-23T18:43:06.828251650Z" level=info msg="StartContainer for \"dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319\"" Jan 23 18:43:06.829452 containerd[1598]: time="2026-01-23T18:43:06.829351704Z" level=info msg="connecting to shim dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319" address="unix:///run/containerd/s/071cad94abd0f7357d15f4fdf9922a88196a9f88251a521cce0bf929ced0d53f" protocol=ttrpc version=3 Jan 23 18:43:06.829626 kubelet[2424]: I0123 18:43:06.829538 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:06.830066 kubelet[2424]: E0123 18:43:06.830023 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: 
connect: connection refused" node="localhost" Jan 23 18:43:06.836325 containerd[1598]: time="2026-01-23T18:43:06.836197084Z" level=info msg="CreateContainer within sandbox \"df78f9a35569c3eef2f156f0521d3fdd9186940c12267ea19cbb87f06f9de54f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5\"" Jan 23 18:43:06.836993 containerd[1598]: time="2026-01-23T18:43:06.836972905Z" level=info msg="StartContainer for \"840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5\"" Jan 23 18:43:06.838552 containerd[1598]: time="2026-01-23T18:43:06.838442458Z" level=info msg="connecting to shim 840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5" address="unix:///run/containerd/s/7fc1958758a2c233e7347e1a2a6d0b7ab6349285db66198f89d52b5699d1ec2b" protocol=ttrpc version=3 Jan 23 18:43:06.855609 containerd[1598]: time="2026-01-23T18:43:06.855540867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"36db14638a213154349a14c64861716b46a471c25c42121ef285ccbcd32758d0\"" Jan 23 18:43:06.857359 kubelet[2424]: E0123 18:43:06.857304 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:06.860057 containerd[1598]: time="2026-01-23T18:43:06.860033036Z" level=info msg="CreateContainer within sandbox \"36db14638a213154349a14c64861716b46a471c25c42121ef285ccbcd32758d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:43:06.866500 systemd[1]: Started cri-containerd-840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5.scope - libcontainer container 840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5. Jan 23 18:43:06.868898 systemd[1]: Started cri-containerd-dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319.scope - libcontainer container dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319. 
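The kubelet errors timestamped 18:43:06 above (the reflector list failures and the node-registration attempt against https://10.0.0.138:6443 returning `connect: connection refused`) line up with containerd only now creating and starting the kube-apiserver container, so nothing is listening on port 6443 yet. A minimal diagnostic sketch, not part of this log, that polls the same endpoint until it accepts TCP connections; the address is taken from the log, while the timeout and retry interval are arbitrary assumptions:

    import socket
    import time

    API_HOST, API_PORT = "10.0.0.138", 6443   # endpoint from the kubelet errors above

    def wait_for_apiserver(timeout_s: float = 60.0, interval_s: float = 2.0) -> bool:
        # Poll until a TCP connect succeeds (apiserver listening) or the deadline passes.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((API_HOST, API_PORT), timeout=2.0):
                    return True
            except OSError:
                time.sleep(interval_s)   # same "connection refused" the kubelet reports
        return False

    if __name__ == "__main__":
        print("apiserver reachable:", wait_for_apiserver())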
Jan 23 18:43:06.885899 containerd[1598]: time="2026-01-23T18:43:06.885839614Z" level=info msg="Container ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:06.896502 containerd[1598]: time="2026-01-23T18:43:06.896350718Z" level=info msg="CreateContainer within sandbox \"36db14638a213154349a14c64861716b46a471c25c42121ef285ccbcd32758d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b\"" Jan 23 18:43:06.895000 audit: BPF prog-id=98 op=LOAD Jan 23 18:43:06.897798 containerd[1598]: time="2026-01-23T18:43:06.897730700Z" level=info msg="StartContainer for \"ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b\"" Jan 23 18:43:06.896000 audit: BPF prog-id=99 op=LOAD Jan 23 18:43:06.896000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.896000 audit: BPF prog-id=99 op=UNLOAD Jan 23 18:43:06.896000 audit[2594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.897000 audit: BPF prog-id=100 op=LOAD Jan 23 18:43:06.897000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.897000 audit: BPF prog-id=101 op=LOAD Jan 23 18:43:06.897000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.897000 audit: BPF 
prog-id=101 op=UNLOAD Jan 23 18:43:06.897000 audit[2594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.897000 audit: BPF prog-id=100 op=UNLOAD Jan 23 18:43:06.897000 audit[2594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.899947 containerd[1598]: time="2026-01-23T18:43:06.899874977Z" level=info msg="connecting to shim ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b" address="unix:///run/containerd/s/ff2f6c0efc988a8aa626cb8c7a1ae6a859b352d2768f2706476325086387de23" protocol=ttrpc version=3 Jan 23 18:43:06.898000 audit: BPF prog-id=102 op=LOAD Jan 23 18:43:06.898000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2479 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466633061343436616433383538333338373730636138646132323361 Jan 23 18:43:06.901000 audit: BPF prog-id=103 op=LOAD Jan 23 18:43:06.902000 audit: BPF prog-id=104 op=LOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.902000 audit: BPF prog-id=104 op=UNLOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.902000 audit: BPF prog-id=105 op=LOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.902000 audit: BPF prog-id=106 op=LOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.902000 audit: BPF prog-id=106 op=UNLOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.902000 audit: BPF prog-id=105 op=UNLOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.902000 audit: BPF prog-id=107 op=LOAD Jan 23 18:43:06.902000 audit[2605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2477 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.902000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834306533616235363365646266663439353934636266393734343064 Jan 23 18:43:06.946705 systemd[1]: Started cri-containerd-ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b.scope - libcontainer container ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b. Jan 23 18:43:06.981000 audit: BPF prog-id=108 op=LOAD Jan 23 18:43:06.982000 audit: BPF prog-id=109 op=LOAD Jan 23 18:43:06.982000 audit[2637]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.982000 audit: BPF prog-id=109 op=UNLOAD Jan 23 18:43:06.982000 audit[2637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.982000 audit: BPF prog-id=110 op=LOAD Jan 23 18:43:06.982000 audit[2637]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.983000 audit: BPF prog-id=111 op=LOAD Jan 23 18:43:06.983000 audit[2637]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.983000 audit: BPF prog-id=111 op=UNLOAD Jan 23 18:43:06.983000 audit[2637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.983000 audit: BPF prog-id=110 op=UNLOAD Jan 23 18:43:06.983000 audit[2637]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.984000 audit: BPF prog-id=112 op=LOAD Jan 23 18:43:06.984000 audit[2637]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2505 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:06.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566396130353064386363353739316531393732626233666432363233 Jan 23 18:43:06.987595 containerd[1598]: time="2026-01-23T18:43:06.987104432Z" level=info msg="StartContainer for \"dfc0a446ad3858338770ca8da223a33a588ecaa4a06c3d69a7d059d930ea4319\" returns successfully" Jan 23 18:43:06.989528 containerd[1598]: time="2026-01-23T18:43:06.989433107Z" level=info msg="StartContainer for \"840e3ab563edbff49594cbf97440d418668f023121ef510031f3240e0c25c2e5\" returns successfully" Jan 23 18:43:07.104557 kubelet[2424]: W0123 18:43:07.088871 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:07.104557 kubelet[2424]: E0123 18:43:07.089195 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:43:07.187677 kubelet[2424]: W0123 18:43:07.186862 2424 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jan 23 18:43:07.187677 kubelet[2424]: E0123 18:43:07.187228 2424 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" 
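The long `proctitle=` values in the audit records above are hex-encoded command lines: the kernel hex-encodes the whole argv because the arguments are separated by NUL bytes. The runc entries here decode to `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...` followed by the (truncated) container ID. A small helper sketch for reading such values; the sample string is just the leading bytes of one record above:

    def decode_proctitle(hex_value: str) -> list:
        # Audit PROCTITLE records hex-encode the full argv, with NUL bytes between arguments.
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Leading bytes of the runc proctitle seen in the records above (truncated for brevity).
    print(decode_proctitle("72756E63002D2D726F6F74"))   # -> ['runc', '--root']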
Jan 23 18:43:07.513454 containerd[1598]: time="2026-01-23T18:43:07.513350518Z" level=info msg="StartContainer for \"ef9a050d8cc5791e1972bb3fd262316ba27ee8f0c8377c9d839b99810c83766b\" returns successfully" Jan 23 18:43:07.514757 update_engine[1579]: I20260123 18:43:07.514596 1579 update_attempter.cc:509] Updating boot flags... Jan 23 18:43:07.638362 kubelet[2424]: I0123 18:43:07.638302 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:07.953103 kubelet[2424]: E0123 18:43:07.953006 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:07.956319 kubelet[2424]: E0123 18:43:07.955651 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:07.956319 kubelet[2424]: E0123 18:43:07.955941 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:07.961971 kubelet[2424]: E0123 18:43:07.959455 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:07.965514 kubelet[2424]: E0123 18:43:07.965494 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:07.965864 kubelet[2424]: E0123 18:43:07.965846 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:09.127802 kubelet[2424]: E0123 18:43:09.127645 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:09.127802 kubelet[2424]: E0123 18:43:09.127882 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:09.131878 kubelet[2424]: E0123 18:43:09.131832 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:09.133667 kubelet[2424]: E0123 18:43:09.133643 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:09.133939 kubelet[2424]: E0123 18:43:09.133923 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:09.134051 kubelet[2424]: E0123 18:43:09.133254 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:10.130596 kubelet[2424]: E0123 18:43:10.130421 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:10.130596 kubelet[2424]: E0123 18:43:10.130582 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:10.131967 kubelet[2424]: E0123 18:43:10.131025 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:10.131967 kubelet[2424]: E0123 18:43:10.131130 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:11.153408 kubelet[2424]: E0123 18:43:11.153051 2424 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 18:43:11.153408 kubelet[2424]: E0123 18:43:11.153626 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:11.546750 kubelet[2424]: E0123 18:43:11.544581 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 18:43:11.658084 kubelet[2424]: I0123 18:43:11.657976 2424 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:43:11.724309 kubelet[2424]: I0123 18:43:11.722353 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:11.863526 kubelet[2424]: E0123 18:43:11.862566 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:11.865336 kubelet[2424]: I0123 18:43:11.864799 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:11.868060 kubelet[2424]: E0123 18:43:11.867975 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:11.868060 kubelet[2424]: I0123 18:43:11.868050 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:43:11.869888 kubelet[2424]: E0123 18:43:11.869805 2424 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 18:43:12.183090 kubelet[2424]: I0123 18:43:12.159232 2424 apiserver.go:52] "Watching apiserver" Jan 23 18:43:12.221980 kubelet[2424]: I0123 18:43:12.221889 2424 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:43:13.551957 kubelet[2424]: I0123 18:43:13.551769 2424 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:13.562705 kubelet[2424]: E0123 18:43:13.562641 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:14.216542 kubelet[2424]: E0123 18:43:14.216186 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 
18:43:14.616309 systemd[1]: Reload requested from client PID 2722 ('systemctl') (unit session-8.scope)... Jan 23 18:43:14.616385 systemd[1]: Reloading... Jan 23 18:43:14.768382 zram_generator::config[2768]: No configuration found. Jan 23 18:43:15.294796 systemd[1]: Reloading finished in 676 ms. Jan 23 18:43:15.330447 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:15.345021 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:43:15.345488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:15.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:15.345588 systemd[1]: kubelet.service: Consumed 2.837s CPU time, 132.3M memory peak. Jan 23 18:43:15.347335 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 23 18:43:15.347404 kernel: audit: type=1131 audit(1769193795.344:393): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:15.348353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:43:15.363351 kernel: audit: type=1334 audit(1769193795.346:394): prog-id=113 op=LOAD Jan 23 18:43:15.363440 kernel: audit: type=1334 audit(1769193795.346:395): prog-id=74 op=UNLOAD Jan 23 18:43:15.363477 kernel: audit: type=1334 audit(1769193795.346:396): prog-id=114 op=LOAD Jan 23 18:43:15.363543 kernel: audit: type=1334 audit(1769193795.346:397): prog-id=115 op=LOAD Jan 23 18:43:15.363575 kernel: audit: type=1334 audit(1769193795.346:398): prog-id=75 op=UNLOAD Jan 23 18:43:15.363612 kernel: audit: type=1334 audit(1769193795.346:399): prog-id=76 op=UNLOAD Jan 23 18:43:15.363641 kernel: audit: type=1334 audit(1769193795.346:400): prog-id=116 op=LOAD Jan 23 18:43:15.363661 kernel: audit: type=1334 audit(1769193795.346:401): prog-id=66 op=UNLOAD Jan 23 18:43:15.363698 kernel: audit: type=1334 audit(1769193795.351:402): prog-id=117 op=LOAD Jan 23 18:43:15.346000 audit: BPF prog-id=113 op=LOAD Jan 23 18:43:15.346000 audit: BPF prog-id=74 op=UNLOAD Jan 23 18:43:15.346000 audit: BPF prog-id=114 op=LOAD Jan 23 18:43:15.346000 audit: BPF prog-id=115 op=LOAD Jan 23 18:43:15.346000 audit: BPF prog-id=75 op=UNLOAD Jan 23 18:43:15.346000 audit: BPF prog-id=76 op=UNLOAD Jan 23 18:43:15.346000 audit: BPF prog-id=116 op=LOAD Jan 23 18:43:15.346000 audit: BPF prog-id=66 op=UNLOAD Jan 23 18:43:15.351000 audit: BPF prog-id=117 op=LOAD Jan 23 18:43:15.351000 audit: BPF prog-id=71 op=UNLOAD Jan 23 18:43:15.351000 audit: BPF prog-id=118 op=LOAD Jan 23 18:43:15.351000 audit: BPF prog-id=119 op=LOAD Jan 23 18:43:15.351000 audit: BPF prog-id=72 op=UNLOAD Jan 23 18:43:15.351000 audit: BPF prog-id=73 op=UNLOAD Jan 23 18:43:15.352000 audit: BPF prog-id=120 op=LOAD Jan 23 18:43:15.352000 audit: BPF prog-id=67 op=UNLOAD Jan 23 18:43:15.352000 audit: BPF prog-id=121 op=LOAD Jan 23 18:43:15.352000 audit: BPF prog-id=122 op=LOAD Jan 23 18:43:15.352000 audit: BPF prog-id=68 op=UNLOAD Jan 23 18:43:15.352000 audit: BPF prog-id=69 op=UNLOAD Jan 23 18:43:15.354000 audit: BPF prog-id=123 op=LOAD Jan 23 18:43:15.354000 audit: BPF prog-id=82 op=UNLOAD Jan 23 18:43:15.357000 audit: BPF prog-id=124 op=LOAD Jan 23 18:43:15.357000 audit: BPF prog-id=77 op=UNLOAD Jan 23 18:43:15.357000 audit: BPF prog-id=125 op=LOAD 
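The repeated kubelet `Nameserver limits exceeded` errors earlier in this boot report that only three nameservers (1.1.1.1, 1.0.0.1, 8.8.8.8) were applied and the remaining entries in resolv.conf were dropped, consistent with a three-entry resolver limit. A small sketch, under the assumption that the limit is three, showing how a resolv.conf would split into applied and omitted entries; the fourth server in the example is hypothetical:

    MAX_NAMESERVERS = 3   # assumed limit, matching the three "applied" entries in the log

    def split_nameservers(resolv_conf_text: str):
        # Return (applied, omitted) nameserver lists the way the kubelet message describes.
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    example = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    applied, omitted = split_nameservers(example)
    print("applied:", applied, "omitted:", omitted)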
Jan 23 18:43:15.357000 audit: BPF prog-id=126 op=LOAD Jan 23 18:43:15.357000 audit: BPF prog-id=78 op=UNLOAD Jan 23 18:43:15.357000 audit: BPF prog-id=79 op=UNLOAD Jan 23 18:43:15.359000 audit: BPF prog-id=127 op=LOAD Jan 23 18:43:15.359000 audit: BPF prog-id=128 op=LOAD Jan 23 18:43:15.359000 audit: BPF prog-id=80 op=UNLOAD Jan 23 18:43:15.359000 audit: BPF prog-id=81 op=UNLOAD Jan 23 18:43:15.404000 audit: BPF prog-id=129 op=LOAD Jan 23 18:43:15.404000 audit: BPF prog-id=70 op=UNLOAD Jan 23 18:43:15.406000 audit: BPF prog-id=130 op=LOAD Jan 23 18:43:15.406000 audit: BPF prog-id=63 op=UNLOAD Jan 23 18:43:15.406000 audit: BPF prog-id=131 op=LOAD Jan 23 18:43:15.406000 audit: BPF prog-id=132 op=LOAD Jan 23 18:43:15.406000 audit: BPF prog-id=64 op=UNLOAD Jan 23 18:43:15.406000 audit: BPF prog-id=65 op=UNLOAD Jan 23 18:43:15.807930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:43:15.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:15.840900 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:43:15.989137 kubelet[2814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:43:15.989137 kubelet[2814]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:43:15.989137 kubelet[2814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:43:15.989864 kubelet[2814]: I0123 18:43:15.989257 2814 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:43:16.010339 kubelet[2814]: I0123 18:43:16.010210 2814 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 18:43:16.010747 kubelet[2814]: I0123 18:43:16.010691 2814 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:43:16.011214 kubelet[2814]: I0123 18:43:16.011147 2814 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 18:43:16.015836 kubelet[2814]: I0123 18:43:16.015722 2814 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 18:43:16.018609 kubelet[2814]: I0123 18:43:16.018481 2814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:43:16.026396 kubelet[2814]: I0123 18:43:16.026339 2814 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:43:16.032734 kubelet[2814]: I0123 18:43:16.032667 2814 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:43:16.033180 kubelet[2814]: I0123 18:43:16.033100 2814 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:43:16.034297 kubelet[2814]: I0123 18:43:16.033170 2814 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:43:16.034297 kubelet[2814]: I0123 18:43:16.033663 2814 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:43:16.034297 kubelet[2814]: I0123 18:43:16.033675 2814 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 18:43:16.034297 kubelet[2814]: I0123 18:43:16.033777 2814 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:43:16.034297 kubelet[2814]: I0123 18:43:16.034111 2814 kubelet.go:446] "Attempting to sync node with API server" Jan 23 18:43:16.034590 kubelet[2814]: I0123 18:43:16.034203 2814 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:43:16.034590 kubelet[2814]: I0123 18:43:16.034232 2814 kubelet.go:352] "Adding apiserver pod source" Jan 23 18:43:16.034590 kubelet[2814]: I0123 18:43:16.034244 2814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:43:16.036902 kubelet[2814]: I0123 18:43:16.036873 2814 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 18:43:16.037419 kubelet[2814]: I0123 18:43:16.037402 2814 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 18:43:16.039257 kubelet[2814]: I0123 18:43:16.039231 2814 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:43:16.039449 kubelet[2814]: I0123 18:43:16.039424 2814 server.go:1287] "Started kubelet" Jan 23 18:43:16.048171 kubelet[2814]: I0123 18:43:16.047609 2814 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:43:16.049012 kubelet[2814]: I0123 18:43:16.048887 2814 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:43:16.049656 kubelet[2814]: I0123 18:43:16.049554 2814 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:43:16.049844 kubelet[2814]: I0123 18:43:16.049714 2814 server.go:479] "Adding debug handlers to kubelet server" Jan 23 18:43:16.058455 kubelet[2814]: I0123 18:43:16.058220 2814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:43:16.059542 kubelet[2814]: E0123 18:43:16.058925 2814 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:43:16.059614 kubelet[2814]: I0123 18:43:16.059595 2814 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:43:16.060932 kubelet[2814]: I0123 18:43:16.060910 2814 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:43:16.061084 kubelet[2814]: I0123 18:43:16.061025 2814 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:43:16.062413 kubelet[2814]: I0123 18:43:16.062385 2814 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:43:16.065022 kubelet[2814]: I0123 18:43:16.064623 2814 factory.go:221] Registration of the systemd container factory successfully Jan 23 18:43:16.065022 kubelet[2814]: I0123 18:43:16.064765 2814 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:43:16.070773 kubelet[2814]: I0123 18:43:16.069694 2814 factory.go:221] Registration of the containerd container factory successfully Jan 23 18:43:16.116645 kubelet[2814]: I0123 18:43:16.116537 2814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 18:43:16.123163 kubelet[2814]: I0123 18:43:16.123105 2814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:43:16.123163 kubelet[2814]: I0123 18:43:16.123146 2814 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 18:43:16.123163 kubelet[2814]: I0123 18:43:16.123163 2814 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
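The container_manager_linux.go entry above prints the effective node config as JSON, including the default hard-eviction thresholds (memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%). A short sketch that pulls those thresholds out of such a dump into a readable form; the excerpt is copied from that log entry and trimmed to the relevant field:

    import json

    # Trimmed excerpt of the nodeConfig JSON printed by container_manager_linux.go above.
    node_config = json.loads("""
    {"HardEvictionThresholds":[
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
      {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}
    ]}
    """)

    for t in node_config["HardEvictionThresholds"]:
        # Thresholds are expressed either as an absolute quantity or as a fraction.
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(f'{t["Signal"]:<22} {t["Operator"]} {value}')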
Jan 23 18:43:16.123163 kubelet[2814]: I0123 18:43:16.123170 2814 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 18:43:16.123404 kubelet[2814]: E0123 18:43:16.123301 2814 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:43:16.160793 kubelet[2814]: I0123 18:43:16.160662 2814 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:43:16.160793 kubelet[2814]: I0123 18:43:16.160709 2814 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:43:16.160793 kubelet[2814]: I0123 18:43:16.160748 2814 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:43:16.161853 kubelet[2814]: I0123 18:43:16.161082 2814 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:43:16.161853 kubelet[2814]: I0123 18:43:16.161111 2814 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:43:16.161853 kubelet[2814]: I0123 18:43:16.161169 2814 policy_none.go:49] "None policy: Start" Jan 23 18:43:16.161853 kubelet[2814]: I0123 18:43:16.161209 2814 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:43:16.161853 kubelet[2814]: I0123 18:43:16.161392 2814 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:43:16.162051 kubelet[2814]: I0123 18:43:16.161861 2814 state_mem.go:75] "Updated machine memory state" Jan 23 18:43:16.174432 kubelet[2814]: I0123 18:43:16.174127 2814 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:43:16.176917 kubelet[2814]: I0123 18:43:16.175590 2814 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:43:16.176917 kubelet[2814]: I0123 18:43:16.175666 2814 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:43:16.178999 kubelet[2814]: I0123 18:43:16.178849 2814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:43:16.179216 kubelet[2814]: E0123 18:43:16.179090 2814 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:43:16.228373 kubelet[2814]: I0123 18:43:16.225717 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 18:43:16.228373 kubelet[2814]: I0123 18:43:16.226751 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:16.228373 kubelet[2814]: I0123 18:43:16.227599 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:16.240015 kubelet[2814]: E0123 18:43:16.239863 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:16.318216 kubelet[2814]: I0123 18:43:16.315485 2814 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 18:43:16.327102 kubelet[2814]: I0123 18:43:16.327040 2814 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 18:43:16.327247 kubelet[2814]: I0123 18:43:16.327127 2814 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 18:43:16.364589 kubelet[2814]: I0123 18:43:16.364505 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/121deb0322f69690a1b4e374bf77690a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"121deb0322f69690a1b4e374bf77690a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:16.364589 kubelet[2814]: I0123 18:43:16.364576 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:16.364589 kubelet[2814]: I0123 18:43:16.364599 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:16.364830 kubelet[2814]: I0123 18:43:16.364615 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:16.364830 kubelet[2814]: I0123 18:43:16.364632 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 23 18:43:16.364830 kubelet[2814]: I0123 18:43:16.364645 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/121deb0322f69690a1b4e374bf77690a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"121deb0322f69690a1b4e374bf77690a\") " pod="kube-system/kube-apiserver-localhost" 
Jan 23 18:43:16.364830 kubelet[2814]: I0123 18:43:16.364657 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:16.364830 kubelet[2814]: I0123 18:43:16.364670 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 18:43:16.364961 kubelet[2814]: I0123 18:43:16.364683 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/121deb0322f69690a1b4e374bf77690a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"121deb0322f69690a1b4e374bf77690a\") " pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:16.740446 kubelet[2814]: E0123 18:43:16.739081 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:16.740446 kubelet[2814]: E0123 18:43:16.739244 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:16.740446 kubelet[2814]: E0123 18:43:16.740255 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:17.060806 kubelet[2814]: I0123 18:43:17.040021 2814 apiserver.go:52] "Watching apiserver" Jan 23 18:43:17.152060 kubelet[2814]: I0123 18:43:17.151691 2814 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:17.153635 kubelet[2814]: E0123 18:43:17.153538 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:17.158391 kubelet[2814]: E0123 18:43:17.156758 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:17.161932 kubelet[2814]: I0123 18:43:17.161754 2814 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:43:17.189995 kubelet[2814]: E0123 18:43:17.189792 2814 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 23 18:43:17.190704 kubelet[2814]: E0123 18:43:17.190104 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:17.261884 kubelet[2814]: I0123 18:43:17.261786 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.261738129 podStartE2EDuration="1.261738129s" 
podCreationTimestamp="2026-01-23 18:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:43:17.261305128 +0000 UTC m=+1.398874311" watchObservedRunningTime="2026-01-23 18:43:17.261738129 +0000 UTC m=+1.399307312" Jan 23 18:43:17.262188 kubelet[2814]: I0123 18:43:17.261939 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.261931241 podStartE2EDuration="1.261931241s" podCreationTimestamp="2026-01-23 18:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:43:17.238499234 +0000 UTC m=+1.376068397" watchObservedRunningTime="2026-01-23 18:43:17.261931241 +0000 UTC m=+1.399500414" Jan 23 18:43:17.290329 kubelet[2814]: I0123 18:43:17.289797 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.2897662 podStartE2EDuration="4.2897662s" podCreationTimestamp="2026-01-23 18:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:43:17.28841298 +0000 UTC m=+1.425982143" watchObservedRunningTime="2026-01-23 18:43:17.2897662 +0000 UTC m=+1.427335363" Jan 23 18:43:18.153632 kubelet[2814]: E0123 18:43:18.153463 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:18.153632 kubelet[2814]: E0123 18:43:18.153562 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:18.378101 kubelet[2814]: I0123 18:43:18.377994 2814 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:43:18.379149 containerd[1598]: time="2026-01-23T18:43:18.378962558Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:43:18.379973 kubelet[2814]: I0123 18:43:18.379907 2814 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:43:19.111470 systemd[1]: Created slice kubepods-besteffort-podaf945b63_74a8_4ba8_9c7f_abf5c36dd0e0.slice - libcontainer container kubepods-besteffort-podaf945b63_74a8_4ba8_9c7f_abf5c36dd0e0.slice. 
Jan 23 18:43:19.155653 kubelet[2814]: E0123 18:43:19.155570 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:19.186023 kubelet[2814]: I0123 18:43:19.185738 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af945b63-74a8-4ba8-9c7f-abf5c36dd0e0-kube-proxy\") pod \"kube-proxy-jg8cd\" (UID: \"af945b63-74a8-4ba8-9c7f-abf5c36dd0e0\") " pod="kube-system/kube-proxy-jg8cd" Jan 23 18:43:19.186023 kubelet[2814]: I0123 18:43:19.185797 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af945b63-74a8-4ba8-9c7f-abf5c36dd0e0-xtables-lock\") pod \"kube-proxy-jg8cd\" (UID: \"af945b63-74a8-4ba8-9c7f-abf5c36dd0e0\") " pod="kube-system/kube-proxy-jg8cd" Jan 23 18:43:19.186023 kubelet[2814]: I0123 18:43:19.185820 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af945b63-74a8-4ba8-9c7f-abf5c36dd0e0-lib-modules\") pod \"kube-proxy-jg8cd\" (UID: \"af945b63-74a8-4ba8-9c7f-abf5c36dd0e0\") " pod="kube-system/kube-proxy-jg8cd" Jan 23 18:43:19.186023 kubelet[2814]: I0123 18:43:19.185848 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sftf\" (UniqueName: \"kubernetes.io/projected/af945b63-74a8-4ba8-9c7f-abf5c36dd0e0-kube-api-access-5sftf\") pod \"kube-proxy-jg8cd\" (UID: \"af945b63-74a8-4ba8-9c7f-abf5c36dd0e0\") " pod="kube-system/kube-proxy-jg8cd" Jan 23 18:43:19.423300 kubelet[2814]: E0123 18:43:19.423190 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:19.424605 containerd[1598]: time="2026-01-23T18:43:19.424072137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jg8cd,Uid:af945b63-74a8-4ba8-9c7f-abf5c36dd0e0,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:19.491244 containerd[1598]: time="2026-01-23T18:43:19.491092422Z" level=info msg="connecting to shim 7b30c71ad14c9a1ecdad0703846e847b78caa0b14c7dd8551fbbf82ca7cedaf7" address="unix:///run/containerd/s/e1134cd1494a23a4e3a1c30bba2a91503113a0dd12a76c2a5bb2b6b6f1968a15" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:19.544785 kubelet[2814]: W0123 18:43:19.544663 2814 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jan 23 18:43:19.544785 kubelet[2814]: E0123 18:43:19.544726 2814 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 23 18:43:19.544785 kubelet[2814]: W0123 18:43:19.544787 2814 reflector.go:569] 
object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jan 23 18:43:19.545018 kubelet[2814]: E0123 18:43:19.544804 2814 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 23 18:43:19.545018 kubelet[2814]: I0123 18:43:19.544844 2814 status_manager.go:890] "Failed to get status for pod" podUID="a5dafc23-4e24-4905-a52b-c72055bbc49b" pod="tigera-operator/tigera-operator-7dcd859c48-tgwcg" err="pods \"tigera-operator-7dcd859c48-tgwcg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" Jan 23 18:43:19.560669 systemd[1]: Created slice kubepods-besteffort-poda5dafc23_4e24_4905_a52b_c72055bbc49b.slice - libcontainer container kubepods-besteffort-poda5dafc23_4e24_4905_a52b_c72055bbc49b.slice. Jan 23 18:43:19.600598 systemd[1]: Started cri-containerd-7b30c71ad14c9a1ecdad0703846e847b78caa0b14c7dd8551fbbf82ca7cedaf7.scope - libcontainer container 7b30c71ad14c9a1ecdad0703846e847b78caa0b14c7dd8551fbbf82ca7cedaf7. Jan 23 18:43:19.619000 audit: BPF prog-id=133 op=LOAD Jan 23 18:43:19.620000 audit: BPF prog-id=134 op=LOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.620000 audit: BPF prog-id=134 op=UNLOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.620000 audit: BPF prog-id=135 op=LOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.620000 audit: BPF prog-id=136 op=LOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.620000 audit: BPF prog-id=136 op=UNLOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.620000 audit: BPF prog-id=135 op=UNLOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.620000 audit: BPF prog-id=137 op=LOAD Jan 23 18:43:19.620000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762333063373161643134633961316563646164303730333834366538 Jan 23 18:43:19.644963 containerd[1598]: time="2026-01-23T18:43:19.644908874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jg8cd,Uid:af945b63-74a8-4ba8-9c7f-abf5c36dd0e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b30c71ad14c9a1ecdad0703846e847b78caa0b14c7dd8551fbbf82ca7cedaf7\"" Jan 23 18:43:19.646484 kubelet[2814]: E0123 18:43:19.646444 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 
18:43:19.650413 containerd[1598]: time="2026-01-23T18:43:19.649537429Z" level=info msg="CreateContainer within sandbox \"7b30c71ad14c9a1ecdad0703846e847b78caa0b14c7dd8551fbbf82ca7cedaf7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:43:19.665460 containerd[1598]: time="2026-01-23T18:43:19.665360036Z" level=info msg="Container f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:19.682832 containerd[1598]: time="2026-01-23T18:43:19.682637634Z" level=info msg="CreateContainer within sandbox \"7b30c71ad14c9a1ecdad0703846e847b78caa0b14c7dd8551fbbf82ca7cedaf7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f\"" Jan 23 18:43:19.683967 containerd[1598]: time="2026-01-23T18:43:19.683914862Z" level=info msg="StartContainer for \"f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f\"" Jan 23 18:43:19.686381 containerd[1598]: time="2026-01-23T18:43:19.686236278Z" level=info msg="connecting to shim f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f" address="unix:///run/containerd/s/e1134cd1494a23a4e3a1c30bba2a91503113a0dd12a76c2a5bb2b6b6f1968a15" protocol=ttrpc version=3 Jan 23 18:43:19.690450 kubelet[2814]: I0123 18:43:19.690366 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5dafc23-4e24-4905-a52b-c72055bbc49b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tgwcg\" (UID: \"a5dafc23-4e24-4905-a52b-c72055bbc49b\") " pod="tigera-operator/tigera-operator-7dcd859c48-tgwcg" Jan 23 18:43:19.690532 kubelet[2814]: I0123 18:43:19.690466 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjr8\" (UniqueName: \"kubernetes.io/projected/a5dafc23-4e24-4905-a52b-c72055bbc49b-kube-api-access-4kjr8\") pod \"tigera-operator-7dcd859c48-tgwcg\" (UID: \"a5dafc23-4e24-4905-a52b-c72055bbc49b\") " pod="tigera-operator/tigera-operator-7dcd859c48-tgwcg" Jan 23 18:43:19.723656 systemd[1]: Started cri-containerd-f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f.scope - libcontainer container f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f. 
Jan 23 18:43:19.844000 audit: BPF prog-id=138 op=LOAD Jan 23 18:43:19.844000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2876 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630656463323134343233383564663339666634303336336636333464 Jan 23 18:43:19.844000 audit: BPF prog-id=139 op=LOAD Jan 23 18:43:19.844000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2876 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630656463323134343233383564663339666634303336336636333464 Jan 23 18:43:19.844000 audit: BPF prog-id=139 op=UNLOAD Jan 23 18:43:19.844000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630656463323134343233383564663339666634303336336636333464 Jan 23 18:43:19.844000 audit: BPF prog-id=138 op=UNLOAD Jan 23 18:43:19.844000 audit[2912]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630656463323134343233383564663339666634303336336636333464 Jan 23 18:43:19.844000 audit: BPF prog-id=140 op=LOAD Jan 23 18:43:19.844000 audit[2912]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2876 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:19.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630656463323134343233383564663339666634303336336636333464 Jan 23 18:43:19.904437 containerd[1598]: time="2026-01-23T18:43:19.904364525Z" level=info msg="StartContainer for 
\"f0edc21442385df39ff40363f634db45c61fd46b6038c32c9fb61a1ecc579b0f\" returns successfully" Jan 23 18:43:20.161046 kubelet[2814]: E0123 18:43:20.160937 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:20.164746 kubelet[2814]: E0123 18:43:20.164617 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:20.201963 kubelet[2814]: I0123 18:43:20.201798 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jg8cd" podStartSLOduration=1.20178041 podStartE2EDuration="1.20178041s" podCreationTimestamp="2026-01-23 18:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:43:20.18848524 +0000 UTC m=+4.326054413" watchObservedRunningTime="2026-01-23 18:43:20.20178041 +0000 UTC m=+4.339349573" Jan 23 18:43:20.263000 audit[2977]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.263000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8c5479b0 a2=0 a3=7fff8c54799c items=0 ppid=2926 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:43:20.264000 audit[2978]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.264000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2c54b340 a2=0 a3=7ffe2c54b32c items=0 ppid=2926 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 18:43:20.266000 audit[2979]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.266000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa90eb810 a2=0 a3=7fffa90eb7fc items=0 ppid=2926 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:43:20.266000 audit[2980]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.266000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd605e80f0 a2=0 a3=7ffd605e80dc items=0 ppid=2926 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 18:43:20.274000 audit[2982]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.274000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1ebc80a0 a2=0 a3=7ffd1ebc808c items=0 ppid=2926 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:43:20.276000 audit[2983]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.276000 audit[2983]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd14affa10 a2=0 a3=7ffd14aff9fc items=0 ppid=2926 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 18:43:20.398928 kernel: kauditd_printk_skb: 87 callbacks suppressed Jan 23 18:43:20.399686 kernel: audit: type=1325 audit(1769193800.390:454): table=filter:60 family=2 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.390000 audit[2985]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.390000 audit[2985]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdbb07a5d0 a2=0 a3=7ffdbb07a5bc items=0 ppid=2926 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:43:20.417087 kernel: audit: type=1300 audit(1769193800.390:454): arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdbb07a5d0 a2=0 a3=7ffdbb07a5bc items=0 ppid=2926 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.417204 kernel: audit: type=1327 audit(1769193800.390:454): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:43:20.405000 audit[2987]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.405000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff90fae790 a2=0 a3=7fff90fae77c items=0 ppid=2926 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.433213 kernel: audit: type=1325 audit(1769193800.405:455): table=filter:61 family=2 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.433391 kernel: audit: type=1300 audit(1769193800.405:455): arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff90fae790 a2=0 a3=7fff90fae77c items=0 ppid=2926 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.433485 kernel: audit: type=1327 audit(1769193800.405:455): proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 18:43:20.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 18:43:20.415000 audit[2990]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.446822 kernel: audit: type=1325 audit(1769193800.415:456): table=filter:62 family=2 entries=1 op=nft_register_rule pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.446883 kernel: audit: type=1300 audit(1769193800.415:456): arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffb8dcc980 a2=0 a3=7fffb8dcc96c items=0 ppid=2926 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.415000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffb8dcc980 a2=0 a3=7fffb8dcc96c items=0 ppid=2926 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 18:43:20.466603 kernel: audit: type=1327 audit(1769193800.415:456): proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 18:43:20.466856 kernel: audit: type=1325 audit(1769193800.418:457): table=filter:63 family=2 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.418000 audit[2991]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.418000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffeed1cb2f0 a2=0 a3=7ffeed1cb2dc items=0 ppid=2926 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:43:20.424000 audit[2993]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.424000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdcd1d2bd0 a2=0 a3=7ffdcd1d2bbc items=0 ppid=2926 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:43:20.426000 audit[2994]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.426000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6f52afb0 a2=0 a3=7fff6f52af9c items=0 ppid=2926 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.426000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:43:20.432000 audit[2996]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.432000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd43e1f0b0 a2=0 a3=7ffd43e1f09c items=0 ppid=2926 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.432000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 18:43:20.441000 audit[2999]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.441000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe558846a0 a2=0 a3=7ffe5588468c items=0 ppid=2926 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.441000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 23 18:43:20.443000 audit[3000]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.443000 audit[3000]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd10499eb0 a2=0 a3=7ffd10499e9c items=0 ppid=2926 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:43:20.449000 audit[3002]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.449000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee4ff7570 a2=0 a3=7ffee4ff755c items=0 ppid=2926 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:43:20.451000 audit[3003]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.451000 audit[3003]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff96747550 a2=0 a3=7fff9674753c items=0 ppid=2926 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:43:20.459000 audit[3005]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.459000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6b18e5f0 a2=0 a3=7ffe6b18e5dc items=0 ppid=2926 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:43:20.471000 audit[3008]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.471000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0b0cb2a0 a2=0 a3=7ffc0b0cb28c items=0 ppid=2926 
pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:43:20.482000 audit[3011]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.482000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda8a96480 a2=0 a3=7ffda8a9646c items=0 ppid=2926 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.482000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 18:43:20.484000 audit[3012]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.484000 audit[3012]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7ba47f70 a2=0 a3=7ffe7ba47f5c items=0 ppid=2926 pid=3012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:43:20.490000 audit[3014]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.490000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdb30ad9f0 a2=0 a3=7ffdb30ad9dc items=0 ppid=2926 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:43:20.498000 audit[3017]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.498000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeee130f10 a2=0 a3=7ffeee130efc items=0 ppid=2926 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.498000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:43:20.501000 audit[3018]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.501000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1403b840 a2=0 a3=7ffe1403b82c items=0 ppid=2926 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:43:20.507000 audit[3020]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 18:43:20.507000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffea8c6f6a0 a2=0 a3=7ffea8c6f68c items=0 ppid=2926 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:43:20.542000 audit[3026]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:20.542000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca8f394b0 a2=0 a3=7ffca8f3949c items=0 ppid=2926 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:20.553000 audit[3026]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:20.553000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffca8f394b0 a2=0 a3=7ffca8f3949c items=0 ppid=2926 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:20.556000 audit[3031]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.556000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd085899c0 a2=0 a3=7ffd085899ac items=0 ppid=2926 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 18:43:20.561000 audit[3033]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.561000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdea8474a0 a2=0 a3=7ffdea84748c items=0 ppid=2926 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.561000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 18:43:20.568000 audit[3036]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.568000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffccb739720 a2=0 a3=7ffccb73970c items=0 ppid=2926 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 18:43:20.571000 audit[3037]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.571000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff23cde330 a2=0 a3=7fff23cde31c items=0 ppid=2926 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 18:43:20.579000 audit[3039]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.579000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd21976b10 a2=0 a3=7ffd21976afc items=0 ppid=2926 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 18:43:20.581000 audit[3040]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 
18:43:20.581000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff52389a50 a2=0 a3=7fff52389a3c items=0 ppid=2926 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 18:43:20.587000 audit[3042]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.587000 audit[3042]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5122a500 a2=0 a3=7ffd5122a4ec items=0 ppid=2926 pid=3042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 18:43:20.595000 audit[3045]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.595000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdf82356d0 a2=0 a3=7ffdf82356bc items=0 ppid=2926 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.595000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 18:43:20.598000 audit[3046]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.598000 audit[3046]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf14c44b0 a2=0 a3=7ffcf14c449c items=0 ppid=2926 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 18:43:20.604000 audit[3048]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.604000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf5838570 a2=0 a3=7ffcf583855c items=0 ppid=2926 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.604000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 18:43:20.607000 audit[3049]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.607000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec76692d0 a2=0 a3=7ffec76692bc items=0 ppid=2926 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 18:43:20.614000 audit[3051]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.614000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff06e79bd0 a2=0 a3=7fff06e79bbc items=0 ppid=2926 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 18:43:20.623000 audit[3054]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.623000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeefab8f60 a2=0 a3=7ffeefab8f4c items=0 ppid=2926 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 18:43:20.632000 audit[3057]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.632000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef9a1adb0 a2=0 a3=7ffef9a1ad9c items=0 ppid=2926 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.632000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 18:43:20.635000 audit[3058]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 23 18:43:20.635000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff85be1d30 a2=0 a3=7fff85be1d1c items=0 ppid=2926 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.635000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 18:43:20.640000 audit[3060]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.640000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc7ea907d0 a2=0 a3=7ffc7ea907bc items=0 ppid=2926 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:43:20.649000 audit[3063]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.649000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffceb205910 a2=0 a3=7ffceb2058fc items=0 ppid=2926 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 18:43:20.651000 audit[3064]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.651000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff72d7ca90 a2=0 a3=7fff72d7ca7c items=0 ppid=2926 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 18:43:20.665000 audit[3066]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.665000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc27d035a0 a2=0 a3=7ffc27d0358c items=0 ppid=2926 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.665000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 18:43:20.674000 audit[3067]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.674000 audit[3067]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd20e4ccf0 a2=0 a3=7ffd20e4ccdc items=0 ppid=2926 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 18:43:20.676882 kubelet[2814]: E0123 18:43:20.676803 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:20.684000 audit[3069]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.684000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd9eb468c0 a2=0 a3=7ffd9eb468ac items=0 ppid=2926 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:43:20.701000 audit[3073]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 18:43:20.701000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca7a55ee0 a2=0 a3=7ffca7a55ecc items=0 ppid=2926 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 18:43:20.709000 audit[3075]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:43:20.709000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe18f75720 a2=0 a3=7ffe18f7570c items=0 ppid=2926 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.709000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:20.710000 audit[3075]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 18:43:20.710000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe18f75720 a2=0 a3=7ffe18f7570c items=0 
ppid=2926 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:20.710000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:20.790018 containerd[1598]: time="2026-01-23T18:43:20.789692361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tgwcg,Uid:a5dafc23-4e24-4905-a52b-c72055bbc49b,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:43:20.928912 containerd[1598]: time="2026-01-23T18:43:20.928364851Z" level=info msg="connecting to shim 2c72873796c41cbe493743800f31f143f8c70508b1b016b993262c1c53d32cf2" address="unix:///run/containerd/s/6be073e2b40b3943b1a08b8216460050109235d89b7d2129fb45154c789a9bd5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:20.976864 systemd[1]: Started cri-containerd-2c72873796c41cbe493743800f31f143f8c70508b1b016b993262c1c53d32cf2.scope - libcontainer container 2c72873796c41cbe493743800f31f143f8c70508b1b016b993262c1c53d32cf2. Jan 23 18:43:21.010000 audit: BPF prog-id=141 op=LOAD Jan 23 18:43:21.011000 audit: BPF prog-id=142 op=LOAD Jan 23 18:43:21.011000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.011000 audit: BPF prog-id=142 op=UNLOAD Jan 23 18:43:21.011000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.011000 audit: BPF prog-id=143 op=LOAD Jan 23 18:43:21.011000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.012000 audit: BPF prog-id=144 op=LOAD Jan 23 18:43:21.012000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.012000 audit: BPF prog-id=144 op=UNLOAD Jan 23 18:43:21.012000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.012000 audit: BPF prog-id=143 op=UNLOAD Jan 23 18:43:21.012000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.012000 audit: BPF prog-id=145 op=LOAD Jan 23 18:43:21.012000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3083 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:21.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373238373337393663343163626534393337343338303066333166 Jan 23 18:43:21.065831 containerd[1598]: time="2026-01-23T18:43:21.065765150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tgwcg,Uid:a5dafc23-4e24-4905-a52b-c72055bbc49b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c72873796c41cbe493743800f31f143f8c70508b1b016b993262c1c53d32cf2\"" Jan 23 18:43:21.067944 containerd[1598]: time="2026-01-23T18:43:21.067912479Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:43:21.191631 kubelet[2814]: E0123 18:43:21.185137 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:21.735611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106903194.mount: Deactivated successfully. 
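The audit PROCTITLE fields above record each command line hex-encoded, with NUL bytes separating the arguments. A minimal Python sketch for decoding one (the sample value is the chain-creation event logged at 18:43:20.556 near the top of this excerpt; the helper name is illustrative):

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_argv: str) -> str:
    raw = bytes.fromhex(hex_argv)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# Sample taken verbatim from the 18:43:20.556 audit record in this log.
sample = ("6970367461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572")
print(decode_proctitle(sample))
# -> ip6tables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter

Decoding the remaining PROCTITLE values the same way shows the KUBE-* chains and rules being registered in the IPv6 filter and nat tables (family=10) by the same parent process.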
Jan 23 18:43:22.192020 kubelet[2814]: E0123 18:43:22.188068 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:22.756877 containerd[1598]: time="2026-01-23T18:43:22.756805905Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:22.758694 containerd[1598]: time="2026-01-23T18:43:22.757853539Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 23 18:43:22.759440 containerd[1598]: time="2026-01-23T18:43:22.759339187Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:22.762147 containerd[1598]: time="2026-01-23T18:43:22.762105103Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:22.763023 containerd[1598]: time="2026-01-23T18:43:22.762991062Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.694945865s" Jan 23 18:43:22.763096 containerd[1598]: time="2026-01-23T18:43:22.763029749Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:43:22.766741 containerd[1598]: time="2026-01-23T18:43:22.766692391Z" level=info msg="CreateContainer within sandbox \"2c72873796c41cbe493743800f31f143f8c70508b1b016b993262c1c53d32cf2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:43:22.781673 containerd[1598]: time="2026-01-23T18:43:22.781390057Z" level=info msg="Container f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:22.791313 containerd[1598]: time="2026-01-23T18:43:22.791120061Z" level=info msg="CreateContainer within sandbox \"2c72873796c41cbe493743800f31f143f8c70508b1b016b993262c1c53d32cf2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814\"" Jan 23 18:43:22.792434 containerd[1598]: time="2026-01-23T18:43:22.792365015Z" level=info msg="StartContainer for \"f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814\"" Jan 23 18:43:22.794146 containerd[1598]: time="2026-01-23T18:43:22.794091197Z" level=info msg="connecting to shim f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814" address="unix:///run/containerd/s/6be073e2b40b3943b1a08b8216460050109235d89b7d2129fb45154c789a9bd5" protocol=ttrpc version=3 Jan 23 18:43:22.825553 systemd[1]: Started cri-containerd-f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814.scope - libcontainer container f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814. 
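The recurring kubelet dns.go:153 error above means the node's resolv.conf lists more nameservers than kubelet will pass through: it keeps at most three (the same ceiling the glibc resolver uses), which is why only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive in the applied line. A minimal sketch of that truncation, assuming a hypothetical resolv.conf with a fourth upstream server that is not taken from this log:

MAX_NAMESERVERS = 3  # kubelet applies at most three nameservers

# Hypothetical resolv.conf content that would trigger the warning above;
# the fourth server is an assumption for illustration only.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
applied, omitted = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
if omitted:
    print("Nameserver limits exceeded; applied nameserver line:", " ".join(applied))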
Jan 23 18:43:22.848000 audit: BPF prog-id=146 op=LOAD Jan 23 18:43:22.849000 audit: BPF prog-id=147 op=LOAD Jan 23 18:43:22.849000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.849000 audit: BPF prog-id=147 op=UNLOAD Jan 23 18:43:22.849000 audit[3129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.850000 audit: BPF prog-id=148 op=LOAD Jan 23 18:43:22.850000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.850000 audit: BPF prog-id=149 op=LOAD Jan 23 18:43:22.850000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.850000 audit: BPF prog-id=149 op=UNLOAD Jan 23 18:43:22.850000 audit[3129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.850000 audit: BPF prog-id=148 op=UNLOAD Jan 23 18:43:22.850000 audit[3129]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.850000 audit: BPF prog-id=150 op=LOAD Jan 23 18:43:22.850000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3083 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:22.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632613138343739396339326438393364643465353835373839653264 Jan 23 18:43:22.897643 containerd[1598]: time="2026-01-23T18:43:22.897541613Z" level=info msg="StartContainer for \"f2a184799c92d893dd4e585789e2d286b43882e48b0eb0fc4ffd43924bf17814\" returns successfully" Jan 23 18:43:23.909130 kubelet[2814]: E0123 18:43:23.909049 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:23.927221 kubelet[2814]: I0123 18:43:23.927095 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tgwcg" podStartSLOduration=3.229770656 podStartE2EDuration="4.92707808s" podCreationTimestamp="2026-01-23 18:43:19 +0000 UTC" firstStartedPulling="2026-01-23 18:43:21.067329768 +0000 UTC m=+5.204898931" lastFinishedPulling="2026-01-23 18:43:22.764637192 +0000 UTC m=+6.902206355" observedRunningTime="2026-01-23 18:43:23.224592068 +0000 UTC m=+7.362161241" watchObservedRunningTime="2026-01-23 18:43:23.92707808 +0000 UTC m=+8.064647244" Jan 23 18:43:24.201703 kubelet[2814]: E0123 18:43:24.200067 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:30.234204 kernel: kauditd_printk_skb: 169 callbacks suppressed Jan 23 18:43:30.234538 kernel: audit: type=1106 audit(1769193810.219:515): pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:43:30.219000 audit[1822]: USER_END pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:43:30.220080 sudo[1822]: pam_unix(sudo:session): session closed for user root Jan 23 18:43:30.219000 audit[1822]: CRED_DISP pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 18:43:30.247333 kernel: audit: type=1104 audit(1769193810.219:516): pid=1822 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 18:43:30.253297 sshd[1821]: Connection closed by 10.0.0.1 port 37750 Jan 23 18:43:30.255480 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jan 23 18:43:30.260000 audit[1817]: USER_END pid=1817 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:43:30.265257 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:43:30.266433 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:37750.service: Deactivated successfully. Jan 23 18:43:30.273561 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:43:30.276876 systemd[1]: session-8.scope: Consumed 8.459s CPU time, 220.1M memory peak. Jan 23 18:43:30.260000 audit[1817]: CRED_DISP pid=1817 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:43:30.287947 systemd-logind[1574]: Removed session 8. Jan 23 18:43:30.292524 kernel: audit: type=1106 audit(1769193810.260:517): pid=1817 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:43:30.292604 kernel: audit: type=1104 audit(1769193810.260:518): pid=1817 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:43:30.292628 kernel: audit: type=1131 audit(1769193810.265:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:37750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:43:30.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:37750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:43:30.921000 audit[3222]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:30.943179 kernel: audit: type=1325 audit(1769193810.921:520): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:30.943325 kernel: audit: type=1300 audit(1769193810.921:520): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff5a5c7ce0 a2=0 a3=7fff5a5c7ccc items=0 ppid=2926 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:30.921000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff5a5c7ce0 a2=0 a3=7fff5a5c7ccc items=0 ppid=2926 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:30.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:30.949325 kernel: audit: type=1327 audit(1769193810.921:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:30.948000 audit[3222]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:30.973800 kernel: audit: type=1325 audit(1769193810.948:521): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:30.974079 kernel: audit: type=1300 audit(1769193810.948:521): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5a5c7ce0 a2=0 a3=0 items=0 ppid=2926 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:30.948000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5a5c7ce0 a2=0 a3=0 items=0 ppid=2926 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:30.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:30.993000 audit[3224]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:30.993000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffda0453ed0 a2=0 a3=7ffda0453ebc items=0 ppid=2926 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:30.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:30.997000 audit[3224]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3224 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:30.997000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda0453ed0 a2=0 a3=0 items=0 ppid=2926 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:30.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:33.196000 audit[3227]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:33.196000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd032d5620 a2=0 a3=7ffd032d560c items=0 ppid=2926 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:33.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:33.204000 audit[3227]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:33.204000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd032d5620 a2=0 a3=0 items=0 ppid=2926 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:33.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:33.230000 audit[3229]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:33.230000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdc2aa5280 a2=0 a3=7ffdc2aa526c items=0 ppid=2926 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:33.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:33.235000 audit[3229]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:33.235000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc2aa5280 a2=0 a3=0 items=0 ppid=2926 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:33.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:34.268000 audit[3231]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:34.268000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffc967cfbb0 a2=0 a3=7ffc967cfb9c items=0 ppid=2926 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:34.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:34.276000 audit[3231]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:34.276000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc967cfbb0 a2=0 a3=0 items=0 ppid=2926 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:34.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:35.444313 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 23 18:43:35.444427 kernel: audit: type=1325 audit(1769193815.435:530): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:35.435000 audit[3233]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:35.462347 kernel: audit: type=1300 audit(1769193815.435:530): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea08a5ae0 a2=0 a3=7ffea08a5acc items=0 ppid=2926 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.435000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffea08a5ae0 a2=0 a3=7ffea08a5acc items=0 ppid=2926 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:35.472586 kernel: audit: type=1327 audit(1769193815.435:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:35.466000 audit[3233]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:35.466000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea08a5ae0 a2=0 a3=0 items=0 ppid=2926 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.497680 kernel: audit: type=1325 audit(1769193815.466:531): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:35.498043 kernel: audit: type=1300 audit(1769193815.466:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea08a5ae0 a2=0 a3=0 items=0 ppid=2926 
pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:35.498573 systemd[1]: Created slice kubepods-besteffort-pod0d82ecbb_b0ea_4624_9a62_e6dae4660a37.slice - libcontainer container kubepods-besteffort-pod0d82ecbb_b0ea_4624_9a62_e6dae4660a37.slice. Jan 23 18:43:35.504599 kernel: audit: type=1327 audit(1769193815.466:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:35.601347 kubelet[2814]: I0123 18:43:35.601161 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d82ecbb-b0ea-4624-9a62-e6dae4660a37-typha-certs\") pod \"calico-typha-6d89fb6784-kk9n5\" (UID: \"0d82ecbb-b0ea-4624-9a62-e6dae4660a37\") " pod="calico-system/calico-typha-6d89fb6784-kk9n5" Jan 23 18:43:35.601347 kubelet[2814]: I0123 18:43:35.601319 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d82ecbb-b0ea-4624-9a62-e6dae4660a37-tigera-ca-bundle\") pod \"calico-typha-6d89fb6784-kk9n5\" (UID: \"0d82ecbb-b0ea-4624-9a62-e6dae4660a37\") " pod="calico-system/calico-typha-6d89fb6784-kk9n5" Jan 23 18:43:35.601347 kubelet[2814]: I0123 18:43:35.601346 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h69w\" (UniqueName: \"kubernetes.io/projected/0d82ecbb-b0ea-4624-9a62-e6dae4660a37-kube-api-access-2h69w\") pod \"calico-typha-6d89fb6784-kk9n5\" (UID: \"0d82ecbb-b0ea-4624-9a62-e6dae4660a37\") " pod="calico-system/calico-typha-6d89fb6784-kk9n5" Jan 23 18:43:35.694175 systemd[1]: Created slice kubepods-besteffort-podd794cc0d_03ef_4071_8651_efcea88d9901.slice - libcontainer container kubepods-besteffort-podd794cc0d_03ef_4071_8651_efcea88d9901.slice. 
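The paired iptables-restor audit events above decode (the same way as the earlier PROCTITLE values) to iptables-restore -w 5 -W 100000 --noflush --counters against the IPv4 tables (family=2); the filter entry count grows 15, 16, 17, 18, 19, 21 while nat stays at 12, consistent with the proxy layer re-syncing its rule set as workloads appear. A small sketch for tallying that churn from a saved journal dump (the netfilter.log path is an assumption; the field layout is taken from the lines above):

import re
from collections import Counter

# Count NETFILTER_CFG audit events per (table, family) and remember the
# last entries= value seen, from a saved dump of journal lines like the above.
pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

events = Counter()
last_entries = {}
with open("netfilter.log") as fh:  # hypothetical dump of this journal
    for line in fh:
        for table, family, entries in pattern.findall(line):
            events[(table, family)] += 1
            last_entries[(table, family)] = int(entries)

for (table, family), count in sorted(events.items()):
    print(f"table={table} family={family} events={count} last_entries={last_entries[(table, family)]}")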
Jan 23 18:43:35.703923 kubelet[2814]: I0123 18:43:35.703692 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-cni-bin-dir\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.703923 kubelet[2814]: I0123 18:43:35.703775 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-lib-modules\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.703923 kubelet[2814]: I0123 18:43:35.703801 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d794cc0d-03ef-4071-8651-efcea88d9901-node-certs\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.703923 kubelet[2814]: I0123 18:43:35.703815 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-var-lib-calico\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.703923 kubelet[2814]: I0123 18:43:35.703832 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-xtables-lock\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704194 kubelet[2814]: I0123 18:43:35.703847 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-cni-net-dir\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704194 kubelet[2814]: I0123 18:43:35.703861 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-flexvol-driver-host\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704194 kubelet[2814]: I0123 18:43:35.703886 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-policysync\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704194 kubelet[2814]: I0123 18:43:35.703910 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d794cc0d-03ef-4071-8651-efcea88d9901-tigera-ca-bundle\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704194 kubelet[2814]: I0123 18:43:35.703929 2814 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-cni-log-dir\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704369 kubelet[2814]: I0123 18:43:35.703942 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d794cc0d-03ef-4071-8651-efcea88d9901-var-run-calico\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.704369 kubelet[2814]: I0123 18:43:35.703955 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vk6\" (UniqueName: \"kubernetes.io/projected/d794cc0d-03ef-4071-8651-efcea88d9901-kube-api-access-49vk6\") pod \"calico-node-72rj2\" (UID: \"d794cc0d-03ef-4071-8651-efcea88d9901\") " pod="calico-system/calico-node-72rj2" Jan 23 18:43:35.807108 kubelet[2814]: E0123 18:43:35.806957 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:35.809502 containerd[1598]: time="2026-01-23T18:43:35.809404374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d89fb6784-kk9n5,Uid:0d82ecbb-b0ea-4624-9a62-e6dae4660a37,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:35.818250 kubelet[2814]: E0123 18:43:35.818173 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:35.818250 kubelet[2814]: W0123 18:43:35.818332 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:35.819040 kubelet[2814]: E0123 18:43:35.818677 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:35.822497 kubelet[2814]: E0123 18:43:35.822187 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:35.822497 kubelet[2814]: W0123 18:43:35.822237 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:35.822497 kubelet[2814]: E0123 18:43:35.822314 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:35.865448 containerd[1598]: time="2026-01-23T18:43:35.865348836Z" level=info msg="connecting to shim fed2cbf619ffd82994dd388e4b364256729b4f78a010ee5c1741c094b1a02e10" address="unix:///run/containerd/s/daeaf9bf7e60d6c0b623748a827d60bc97ff07ae63581f6a3ab72c5e0913e00e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:35.904040 kubelet[2814]: E0123 18:43:35.903959 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:35.931573 systemd[1]: Started cri-containerd-fed2cbf619ffd82994dd388e4b364256729b4f78a010ee5c1741c094b1a02e10.scope - libcontainer container fed2cbf619ffd82994dd388e4b364256729b4f78a010ee5c1741c094b1a02e10. Jan 23 18:43:35.968000 audit: BPF prog-id=151 op=LOAD Jan 23 18:43:35.983431 kernel: audit: type=1334 audit(1769193815.968:532): prog-id=151 op=LOAD Jan 23 18:43:35.980000 audit: BPF prog-id=152 op=LOAD Jan 23 18:43:35.980000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.995472 kernel: audit: type=1334 audit(1769193815.980:533): prog-id=152 op=LOAD Jan 23 18:43:35.995585 kernel: audit: type=1300 audit(1769193815.980:533): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.995648 kernel: audit: type=1327 audit(1769193815.980:533): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.999093 kubelet[2814]: E0123 18:43:35.999012 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:36.001452 containerd[1598]: time="2026-01-23T18:43:36.000166567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-72rj2,Uid:d794cc0d-03ef-4071-8651-efcea88d9901,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:36.004937 kubelet[2814]: E0123 18:43:36.004867 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.005067 kubelet[2814]: W0123 18:43:36.005015 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.005067 kubelet[2814]: E0123 
18:43:36.005038 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.005618 kubelet[2814]: E0123 18:43:36.005540 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.005618 kubelet[2814]: W0123 18:43:36.005557 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.005618 kubelet[2814]: E0123 18:43:36.005569 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.006655 kubelet[2814]: E0123 18:43:36.006589 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.006655 kubelet[2814]: W0123 18:43:36.006611 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.006655 kubelet[2814]: E0123 18:43:36.006625 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:35.980000 audit: BPF prog-id=152 op=UNLOAD Jan 23 18:43:35.980000 audit[3260]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.980000 audit: BPF prog-id=153 op=LOAD Jan 23 18:43:35.980000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.981000 audit: BPF prog-id=154 op=LOAD Jan 23 18:43:35.981000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.981000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.981000 audit: BPF prog-id=154 op=UNLOAD Jan 23 18:43:35.981000 audit[3260]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.981000 audit: BPF prog-id=153 op=UNLOAD Jan 23 18:43:35.981000 audit[3260]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:35.981000 audit: BPF prog-id=155 op=LOAD Jan 23 18:43:35.981000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3249 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:35.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665643263626636313966666438323939346464333838653462333634 Jan 23 18:43:36.008146 kubelet[2814]: E0123 18:43:36.007191 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.008146 kubelet[2814]: W0123 18:43:36.007201 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.008146 kubelet[2814]: E0123 18:43:36.007212 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.008146 kubelet[2814]: E0123 18:43:36.007566 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.008146 kubelet[2814]: W0123 18:43:36.007576 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.008146 kubelet[2814]: E0123 18:43:36.007587 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.008752 kubelet[2814]: E0123 18:43:36.008557 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.008752 kubelet[2814]: W0123 18:43:36.008568 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.008752 kubelet[2814]: E0123 18:43:36.008578 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.009049 kubelet[2814]: E0123 18:43:36.009000 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.009049 kubelet[2814]: W0123 18:43:36.009028 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.009049 kubelet[2814]: E0123 18:43:36.009038 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.009793 kubelet[2814]: E0123 18:43:36.009743 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.009793 kubelet[2814]: W0123 18:43:36.009773 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.009793 kubelet[2814]: E0123 18:43:36.009783 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.010492 kubelet[2814]: E0123 18:43:36.010445 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.010492 kubelet[2814]: W0123 18:43:36.010476 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.010492 kubelet[2814]: E0123 18:43:36.010486 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.011000 kubelet[2814]: E0123 18:43:36.010950 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.011000 kubelet[2814]: W0123 18:43:36.010978 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.011000 kubelet[2814]: E0123 18:43:36.010988 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.011477 kubelet[2814]: E0123 18:43:36.011391 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.011477 kubelet[2814]: W0123 18:43:36.011417 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.011477 kubelet[2814]: E0123 18:43:36.011428 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.011826 kubelet[2814]: E0123 18:43:36.011805 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.012012 kubelet[2814]: W0123 18:43:36.011911 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.012012 kubelet[2814]: E0123 18:43:36.011925 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.012398 kubelet[2814]: E0123 18:43:36.012384 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.012489 kubelet[2814]: W0123 18:43:36.012475 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.012539 kubelet[2814]: E0123 18:43:36.012528 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.013096 kubelet[2814]: E0123 18:43:36.013028 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.013096 kubelet[2814]: W0123 18:43:36.013042 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.013096 kubelet[2814]: E0123 18:43:36.013052 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.013759 kubelet[2814]: E0123 18:43:36.013589 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.013759 kubelet[2814]: W0123 18:43:36.013617 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.013759 kubelet[2814]: E0123 18:43:36.013644 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.014207 kubelet[2814]: E0123 18:43:36.014193 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.014306 kubelet[2814]: W0123 18:43:36.014253 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.014358 kubelet[2814]: E0123 18:43:36.014347 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.014759 kubelet[2814]: E0123 18:43:36.014745 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.014877 kubelet[2814]: W0123 18:43:36.014818 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.014877 kubelet[2814]: E0123 18:43:36.014832 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.015220 kubelet[2814]: E0123 18:43:36.015158 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.015220 kubelet[2814]: W0123 18:43:36.015169 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.015427 kubelet[2814]: E0123 18:43:36.015178 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.016001 kubelet[2814]: E0123 18:43:36.015903 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.016001 kubelet[2814]: W0123 18:43:36.015916 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.016001 kubelet[2814]: E0123 18:43:36.015926 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.016794 kubelet[2814]: E0123 18:43:36.016780 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.016869 kubelet[2814]: W0123 18:43:36.016852 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.016958 kubelet[2814]: E0123 18:43:36.016927 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.018026 kubelet[2814]: E0123 18:43:36.017819 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.018026 kubelet[2814]: W0123 18:43:36.017832 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.018026 kubelet[2814]: E0123 18:43:36.017842 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.018026 kubelet[2814]: I0123 18:43:36.017873 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f72bd6e0-6290-4ad0-99d3-a580eaff8fda-kubelet-dir\") pod \"csi-node-driver-w2smd\" (UID: \"f72bd6e0-6290-4ad0-99d3-a580eaff8fda\") " pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:36.018423 kubelet[2814]: E0123 18:43:36.018408 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.018517 kubelet[2814]: W0123 18:43:36.018501 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.018659 kubelet[2814]: E0123 18:43:36.018600 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.018927 kubelet[2814]: I0123 18:43:36.018873 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f72bd6e0-6290-4ad0-99d3-a580eaff8fda-registration-dir\") pod \"csi-node-driver-w2smd\" (UID: \"f72bd6e0-6290-4ad0-99d3-a580eaff8fda\") " pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:36.019231 kubelet[2814]: E0123 18:43:36.019205 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.019231 kubelet[2814]: W0123 18:43:36.019216 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.019865 kubelet[2814]: E0123 18:43:36.019492 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.020355 kubelet[2814]: E0123 18:43:36.020325 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.020355 kubelet[2814]: W0123 18:43:36.020340 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.020520 kubelet[2814]: E0123 18:43:36.020458 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.020887 kubelet[2814]: E0123 18:43:36.020861 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.020887 kubelet[2814]: W0123 18:43:36.020873 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.021055 kubelet[2814]: E0123 18:43:36.020996 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.021166 kubelet[2814]: I0123 18:43:36.021018 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j2zb\" (UniqueName: \"kubernetes.io/projected/f72bd6e0-6290-4ad0-99d3-a580eaff8fda-kube-api-access-5j2zb\") pod \"csi-node-driver-w2smd\" (UID: \"f72bd6e0-6290-4ad0-99d3-a580eaff8fda\") " pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:36.021578 kubelet[2814]: E0123 18:43:36.021565 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.021649 kubelet[2814]: W0123 18:43:36.021636 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.021742 kubelet[2814]: E0123 18:43:36.021693 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.022330 kubelet[2814]: E0123 18:43:36.022222 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.022438 kubelet[2814]: W0123 18:43:36.022419 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.022539 kubelet[2814]: E0123 18:43:36.022516 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.022965 kubelet[2814]: E0123 18:43:36.022928 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.022965 kubelet[2814]: W0123 18:43:36.022940 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.022965 kubelet[2814]: E0123 18:43:36.022949 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.023521 kubelet[2814]: E0123 18:43:36.023486 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.023521 kubelet[2814]: W0123 18:43:36.023498 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.023521 kubelet[2814]: E0123 18:43:36.023507 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.024309 kubelet[2814]: E0123 18:43:36.024193 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.024309 kubelet[2814]: W0123 18:43:36.024206 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.024309 kubelet[2814]: E0123 18:43:36.024215 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.024518 kubelet[2814]: I0123 18:43:36.024232 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f72bd6e0-6290-4ad0-99d3-a580eaff8fda-socket-dir\") pod \"csi-node-driver-w2smd\" (UID: \"f72bd6e0-6290-4ad0-99d3-a580eaff8fda\") " pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:36.025038 kubelet[2814]: E0123 18:43:36.025009 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.025038 kubelet[2814]: W0123 18:43:36.025022 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.025197 kubelet[2814]: E0123 18:43:36.025138 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.025392 kubelet[2814]: I0123 18:43:36.025158 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f72bd6e0-6290-4ad0-99d3-a580eaff8fda-varrun\") pod \"csi-node-driver-w2smd\" (UID: \"f72bd6e0-6290-4ad0-99d3-a580eaff8fda\") " pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:36.025773 kubelet[2814]: E0123 18:43:36.025761 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.025855 kubelet[2814]: W0123 18:43:36.025815 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.026044 kubelet[2814]: E0123 18:43:36.025924 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.026316 kubelet[2814]: E0123 18:43:36.026188 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.026316 kubelet[2814]: W0123 18:43:36.026199 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.026316 kubelet[2814]: E0123 18:43:36.026221 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.026639 kubelet[2814]: E0123 18:43:36.026627 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.026768 kubelet[2814]: W0123 18:43:36.026682 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.026768 kubelet[2814]: E0123 18:43:36.026694 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.027154 kubelet[2814]: E0123 18:43:36.027112 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.027217 kubelet[2814]: W0123 18:43:36.027205 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.027340 kubelet[2814]: E0123 18:43:36.027249 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.038358 containerd[1598]: time="2026-01-23T18:43:36.037471543Z" level=info msg="connecting to shim 58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344" address="unix:///run/containerd/s/394109fcac406dfec01a8db7bff7765bbe394ed5f938983b901c4e6b0a607640" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:36.063544 containerd[1598]: time="2026-01-23T18:43:36.063480277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d89fb6784-kk9n5,Uid:0d82ecbb-b0ea-4624-9a62-e6dae4660a37,Namespace:calico-system,Attempt:0,} returns sandbox id \"fed2cbf619ffd82994dd388e4b364256729b4f78a010ee5c1741c094b1a02e10\"" Jan 23 18:43:36.065125 kubelet[2814]: E0123 18:43:36.065031 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:36.067385 containerd[1598]: time="2026-01-23T18:43:36.067068473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:43:36.085027 systemd[1]: Started cri-containerd-58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344.scope - libcontainer container 58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344. Jan 23 18:43:36.109000 audit: BPF prog-id=156 op=LOAD Jan 23 18:43:36.110000 audit: BPF prog-id=157 op=LOAD Jan 23 18:43:36.110000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.110000 audit: BPF prog-id=157 op=UNLOAD Jan 23 18:43:36.110000 audit[3346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.111000 audit: BPF prog-id=158 op=LOAD Jan 23 18:43:36.111000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.111000 audit: BPF prog-id=159 op=LOAD Jan 23 18:43:36.111000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 
a1=c00017a218 a2=98 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.111000 audit: BPF prog-id=159 op=UNLOAD Jan 23 18:43:36.111000 audit[3346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.111000 audit: BPF prog-id=158 op=UNLOAD Jan 23 18:43:36.111000 audit[3346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.111000 audit: BPF prog-id=160 op=LOAD Jan 23 18:43:36.111000 audit[3346]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3334 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313833626133393033633164336534313832363735316632386165 Jan 23 18:43:36.126377 kubelet[2814]: E0123 18:43:36.126158 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.126377 kubelet[2814]: W0123 18:43:36.126188 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.126377 kubelet[2814]: E0123 18:43:36.126215 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.126798 kubelet[2814]: E0123 18:43:36.126691 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.126798 kubelet[2814]: W0123 18:43:36.126746 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.126944 kubelet[2814]: E0123 18:43:36.126800 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.127132 kubelet[2814]: E0123 18:43:36.127116 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.127132 kubelet[2814]: W0123 18:43:36.127129 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.127217 kubelet[2814]: E0123 18:43:36.127168 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.128952 kubelet[2814]: E0123 18:43:36.128902 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.128952 kubelet[2814]: W0123 18:43:36.128942 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.129022 kubelet[2814]: E0123 18:43:36.128980 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.129782 kubelet[2814]: E0123 18:43:36.129552 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.129782 kubelet[2814]: W0123 18:43:36.129573 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.129782 kubelet[2814]: E0123 18:43:36.129623 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.130016 kubelet[2814]: E0123 18:43:36.129959 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.130016 kubelet[2814]: W0123 18:43:36.129996 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.130153 kubelet[2814]: E0123 18:43:36.130073 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.130520 kubelet[2814]: E0123 18:43:36.130465 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.130520 kubelet[2814]: W0123 18:43:36.130503 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.130752 kubelet[2814]: E0123 18:43:36.130693 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.130905 kubelet[2814]: E0123 18:43:36.130887 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.130905 kubelet[2814]: W0123 18:43:36.130899 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.130997 kubelet[2814]: E0123 18:43:36.130955 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.131469 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.133319 kubelet[2814]: W0123 18:43:36.131490 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.131510 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.132011 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.133319 kubelet[2814]: W0123 18:43:36.132024 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.132165 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.132588 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.133319 kubelet[2814]: W0123 18:43:36.132601 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.132676 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.133319 kubelet[2814]: E0123 18:43:36.132995 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.133573 kubelet[2814]: W0123 18:43:36.133009 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.133573 kubelet[2814]: E0123 18:43:36.133135 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.133573 kubelet[2814]: E0123 18:43:36.133490 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.133573 kubelet[2814]: W0123 18:43:36.133503 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.133649 kubelet[2814]: E0123 18:43:36.133628 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.133996 kubelet[2814]: E0123 18:43:36.133972 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.133996 kubelet[2814]: W0123 18:43:36.133992 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.134154 kubelet[2814]: E0123 18:43:36.134115 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.134645 kubelet[2814]: E0123 18:43:36.134567 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.134645 kubelet[2814]: W0123 18:43:36.134604 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.134835 kubelet[2814]: E0123 18:43:36.134776 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.135256 kubelet[2814]: E0123 18:43:36.135193 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.135256 kubelet[2814]: W0123 18:43:36.135228 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.135365 kubelet[2814]: E0123 18:43:36.135336 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.135847 kubelet[2814]: E0123 18:43:36.135788 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.135847 kubelet[2814]: W0123 18:43:36.135828 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.136224 kubelet[2814]: E0123 18:43:36.135950 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.136505 kubelet[2814]: E0123 18:43:36.136464 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.136820 kubelet[2814]: W0123 18:43:36.136557 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.136820 kubelet[2814]: E0123 18:43:36.136676 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.137217 kubelet[2814]: E0123 18:43:36.137169 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.137217 kubelet[2814]: W0123 18:43:36.137207 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.137395 kubelet[2814]: E0123 18:43:36.137362 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.137828 kubelet[2814]: E0123 18:43:36.137774 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.137828 kubelet[2814]: W0123 18:43:36.137815 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.137972 kubelet[2814]: E0123 18:43:36.137931 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.138493 kubelet[2814]: E0123 18:43:36.138321 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.138493 kubelet[2814]: W0123 18:43:36.138342 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.138610 kubelet[2814]: E0123 18:43:36.138479 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.138957 kubelet[2814]: E0123 18:43:36.138850 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.138957 kubelet[2814]: W0123 18:43:36.138888 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.139059 kubelet[2814]: E0123 18:43:36.139009 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.139754 kubelet[2814]: E0123 18:43:36.139689 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.139813 kubelet[2814]: W0123 18:43:36.139753 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.139930 kubelet[2814]: E0123 18:43:36.139897 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.140349 kubelet[2814]: E0123 18:43:36.140257 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.140349 kubelet[2814]: W0123 18:43:36.140348 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.140486 kubelet[2814]: E0123 18:43:36.140450 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.141016 kubelet[2814]: E0123 18:43:36.140953 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.141016 kubelet[2814]: W0123 18:43:36.140976 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.141016 kubelet[2814]: E0123 18:43:36.140990 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:36.142828 containerd[1598]: time="2026-01-23T18:43:36.142784445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-72rj2,Uid:d794cc0d-03ef-4071-8651-efcea88d9901,Namespace:calico-system,Attempt:0,} returns sandbox id \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\"" Jan 23 18:43:36.145626 kubelet[2814]: E0123 18:43:36.144480 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:36.153147 kubelet[2814]: E0123 18:43:36.153105 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:36.153147 kubelet[2814]: W0123 18:43:36.153147 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:36.153255 kubelet[2814]: E0123 18:43:36.153166 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:36.522000 audit[3406]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:36.522000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdb678d840 a2=0 a3=7ffdb678d82c items=0 ppid=2926 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:36.537000 audit[3406]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:36.537000 audit[3406]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb678d840 a2=0 a3=0 items=0 ppid=2926 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:36.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:37.123757 kubelet[2814]: E0123 18:43:37.123639 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:37.370028 containerd[1598]: time="2026-01-23T18:43:37.369973266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:37.372223 containerd[1598]: time="2026-01-23T18:43:37.370887028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:37.373753 containerd[1598]: time="2026-01-23T18:43:37.373698373Z" level=info msg="ImageCreate event 
name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:37.377505 containerd[1598]: time="2026-01-23T18:43:37.377357997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:37.379440 containerd[1598]: time="2026-01-23T18:43:37.379389954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.31220594s" Jan 23 18:43:37.379440 containerd[1598]: time="2026-01-23T18:43:37.379436609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 18:43:37.383091 containerd[1598]: time="2026-01-23T18:43:37.383055272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 18:43:37.419815 containerd[1598]: time="2026-01-23T18:43:37.419605498Z" level=info msg="CreateContainer within sandbox \"fed2cbf619ffd82994dd388e4b364256729b4f78a010ee5c1741c094b1a02e10\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 18:43:37.436953 containerd[1598]: time="2026-01-23T18:43:37.436852115Z" level=info msg="Container 8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:37.467926 containerd[1598]: time="2026-01-23T18:43:37.467809548Z" level=info msg="CreateContainer within sandbox \"fed2cbf619ffd82994dd388e4b364256729b4f78a010ee5c1741c094b1a02e10\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e\"" Jan 23 18:43:37.468613 containerd[1598]: time="2026-01-23T18:43:37.468419732Z" level=info msg="StartContainer for \"8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e\"" Jan 23 18:43:37.471088 containerd[1598]: time="2026-01-23T18:43:37.471032901Z" level=info msg="connecting to shim 8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e" address="unix:///run/containerd/s/daeaf9bf7e60d6c0b623748a827d60bc97ff07ae63581f6a3ab72c5e0913e00e" protocol=ttrpc version=3 Jan 23 18:43:37.521463 systemd[1]: Started cri-containerd-8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e.scope - libcontainer container 8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e. 
Jan 23 18:43:37.560000 audit: BPF prog-id=161 op=LOAD Jan 23 18:43:37.560000 audit: BPF prog-id=162 op=LOAD Jan 23 18:43:37.560000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.560000 audit: BPF prog-id=162 op=UNLOAD Jan 23 18:43:37.560000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.561000 audit: BPF prog-id=163 op=LOAD Jan 23 18:43:37.561000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.561000 audit: BPF prog-id=164 op=LOAD Jan 23 18:43:37.561000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.561000 audit: BPF prog-id=164 op=UNLOAD Jan 23 18:43:37.561000 audit[3417]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.561000 audit: BPF prog-id=163 op=UNLOAD Jan 23 18:43:37.561000 audit[3417]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.562000 audit: BPF prog-id=165 op=LOAD Jan 23 18:43:37.562000 audit[3417]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3249 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:37.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863626531633236623364643739363233353337343935303833666238 Jan 23 18:43:37.630347 containerd[1598]: time="2026-01-23T18:43:37.629223058Z" level=info msg="StartContainer for \"8cbe1c26b3dd79623537495083fb8e2e3bac69dbdf533aa7df0b5281e21c895e\" returns successfully" Jan 23 18:43:38.282841 kubelet[2814]: E0123 18:43:38.282793 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:38.345087 kubelet[2814]: E0123 18:43:38.344945 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.345087 kubelet[2814]: W0123 18:43:38.345075 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.345087 kubelet[2814]: E0123 18:43:38.345236 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.347776 kubelet[2814]: E0123 18:43:38.345832 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.347776 kubelet[2814]: W0123 18:43:38.345844 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.347776 kubelet[2814]: E0123 18:43:38.345854 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.347776 kubelet[2814]: E0123 18:43:38.346303 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.347776 kubelet[2814]: W0123 18:43:38.346315 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.347776 kubelet[2814]: E0123 18:43:38.346325 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.347776 kubelet[2814]: E0123 18:43:38.347120 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.347776 kubelet[2814]: W0123 18:43:38.347135 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.347776 kubelet[2814]: E0123 18:43:38.347149 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.348108 kubelet[2814]: E0123 18:43:38.347878 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.348108 kubelet[2814]: W0123 18:43:38.347892 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.348108 kubelet[2814]: E0123 18:43:38.347908 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.348325 kubelet[2814]: E0123 18:43:38.348239 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.348424 kubelet[2814]: W0123 18:43:38.348375 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.348424 kubelet[2814]: E0123 18:43:38.348417 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.348830 kubelet[2814]: E0123 18:43:38.348749 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.348830 kubelet[2814]: W0123 18:43:38.348793 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.348830 kubelet[2814]: E0123 18:43:38.348809 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.349267 kubelet[2814]: E0123 18:43:38.349223 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.349355 kubelet[2814]: W0123 18:43:38.349321 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.349355 kubelet[2814]: E0123 18:43:38.349341 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.350034 kubelet[2814]: E0123 18:43:38.349849 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.350034 kubelet[2814]: W0123 18:43:38.349863 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.350034 kubelet[2814]: E0123 18:43:38.349878 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.350347 kubelet[2814]: E0123 18:43:38.350210 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.350347 kubelet[2814]: W0123 18:43:38.350243 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.350439 kubelet[2814]: E0123 18:43:38.350257 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.350885 kubelet[2814]: E0123 18:43:38.350812 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.350885 kubelet[2814]: W0123 18:43:38.350833 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.350885 kubelet[2814]: E0123 18:43:38.350845 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.351184 kubelet[2814]: E0123 18:43:38.351138 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.351184 kubelet[2814]: W0123 18:43:38.351152 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.351184 kubelet[2814]: E0123 18:43:38.351163 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.351851 kubelet[2814]: E0123 18:43:38.351811 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.351851 kubelet[2814]: W0123 18:43:38.351838 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.351851 kubelet[2814]: E0123 18:43:38.351851 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.352359 kubelet[2814]: E0123 18:43:38.352182 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.352359 kubelet[2814]: W0123 18:43:38.352335 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.352359 kubelet[2814]: E0123 18:43:38.352350 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.352917 kubelet[2814]: E0123 18:43:38.352881 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.352917 kubelet[2814]: W0123 18:43:38.352900 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.352917 kubelet[2814]: E0123 18:43:38.352915 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.358971 kubelet[2814]: E0123 18:43:38.358859 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.358971 kubelet[2814]: W0123 18:43:38.358890 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.358971 kubelet[2814]: E0123 18:43:38.358907 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.359436 kubelet[2814]: E0123 18:43:38.359398 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.359436 kubelet[2814]: W0123 18:43:38.359425 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.359535 kubelet[2814]: E0123 18:43:38.359457 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.360039 kubelet[2814]: E0123 18:43:38.359943 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.360039 kubelet[2814]: W0123 18:43:38.359978 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.360039 kubelet[2814]: E0123 18:43:38.360011 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.361059 kubelet[2814]: E0123 18:43:38.360555 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.361059 kubelet[2814]: W0123 18:43:38.360578 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.361059 kubelet[2814]: E0123 18:43:38.360662 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.361059 kubelet[2814]: E0123 18:43:38.360993 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.361059 kubelet[2814]: W0123 18:43:38.361007 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.361657 kubelet[2814]: E0123 18:43:38.361513 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.361738 kubelet[2814]: E0123 18:43:38.361720 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.361738 kubelet[2814]: W0123 18:43:38.361734 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.361973 kubelet[2814]: E0123 18:43:38.361917 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.362208 kubelet[2814]: E0123 18:43:38.362144 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.362208 kubelet[2814]: W0123 18:43:38.362178 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.362370 kubelet[2814]: E0123 18:43:38.362241 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.362590 kubelet[2814]: E0123 18:43:38.362551 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.362590 kubelet[2814]: W0123 18:43:38.362587 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.362739 kubelet[2814]: E0123 18:43:38.362646 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.363167 kubelet[2814]: E0123 18:43:38.363107 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.363167 kubelet[2814]: W0123 18:43:38.363141 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.363319 kubelet[2814]: E0123 18:43:38.363172 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.363591 kubelet[2814]: E0123 18:43:38.363540 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.363591 kubelet[2814]: W0123 18:43:38.363567 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.363591 kubelet[2814]: E0123 18:43:38.363590 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.364070 kubelet[2814]: E0123 18:43:38.364019 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.364070 kubelet[2814]: W0123 18:43:38.364049 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.364161 kubelet[2814]: E0123 18:43:38.364119 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.364463 kubelet[2814]: E0123 18:43:38.364414 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.364463 kubelet[2814]: W0123 18:43:38.364438 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.364695 kubelet[2814]: E0123 18:43:38.364482 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.365094 kubelet[2814]: E0123 18:43:38.365056 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.365094 kubelet[2814]: W0123 18:43:38.365088 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.365172 kubelet[2814]: E0123 18:43:38.365103 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.365472 kubelet[2814]: E0123 18:43:38.365434 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.365472 kubelet[2814]: W0123 18:43:38.365465 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.365563 kubelet[2814]: E0123 18:43:38.365498 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.365955 kubelet[2814]: E0123 18:43:38.365919 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.365955 kubelet[2814]: W0123 18:43:38.365947 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.366101 kubelet[2814]: E0123 18:43:38.365983 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.366573 kubelet[2814]: E0123 18:43:38.366537 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.366573 kubelet[2814]: W0123 18:43:38.366566 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.366700 kubelet[2814]: E0123 18:43:38.366598 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.367101 kubelet[2814]: E0123 18:43:38.367071 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.367101 kubelet[2814]: W0123 18:43:38.367091 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.367195 kubelet[2814]: E0123 18:43:38.367121 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:43:38.367849 kubelet[2814]: E0123 18:43:38.367513 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:43:38.367899 kubelet[2814]: W0123 18:43:38.367853 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:43:38.367899 kubelet[2814]: E0123 18:43:38.367867 2814 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:43:38.656317 containerd[1598]: time="2026-01-23T18:43:38.655414425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:38.656894 containerd[1598]: time="2026-01-23T18:43:38.656480174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:38.658102 containerd[1598]: time="2026-01-23T18:43:38.658018272Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:38.660977 containerd[1598]: time="2026-01-23T18:43:38.660914018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:38.661613 containerd[1598]: time="2026-01-23T18:43:38.661553297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.278100961s" Jan 23 18:43:38.661654 containerd[1598]: time="2026-01-23T18:43:38.661624738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:43:38.664105 containerd[1598]: time="2026-01-23T18:43:38.663999953Z" level=info msg="CreateContainer within sandbox \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:43:38.686445 containerd[1598]: time="2026-01-23T18:43:38.686371445Z" level=info msg="Container e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:38.696549 containerd[1598]: time="2026-01-23T18:43:38.696458509Z" level=info msg="CreateContainer within sandbox \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa\"" Jan 23 18:43:38.698312 containerd[1598]: time="2026-01-23T18:43:38.697547493Z" level=info msg="StartContainer for \"e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa\"" Jan 23 18:43:38.699341 containerd[1598]: time="2026-01-23T18:43:38.699215781Z" level=info msg="connecting to shim 
e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa" address="unix:///run/containerd/s/394109fcac406dfec01a8db7bff7765bbe394ed5f938983b901c4e6b0a607640" protocol=ttrpc version=3 Jan 23 18:43:38.751655 systemd[1]: Started cri-containerd-e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa.scope - libcontainer container e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa. Jan 23 18:43:38.848000 audit: BPF prog-id=166 op=LOAD Jan 23 18:43:38.848000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3334 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323038393261366562666632353131356463623136613934306133 Jan 23 18:43:38.848000 audit: BPF prog-id=167 op=LOAD Jan 23 18:43:38.848000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3334 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323038393261366562666632353131356463623136613934306133 Jan 23 18:43:38.848000 audit: BPF prog-id=167 op=UNLOAD Jan 23 18:43:38.848000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323038393261366562666632353131356463623136613934306133 Jan 23 18:43:38.848000 audit: BPF prog-id=166 op=UNLOAD Jan 23 18:43:38.848000 audit[3491]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323038393261366562666632353131356463623136613934306133 Jan 23 18:43:38.849000 audit: BPF prog-id=168 op=LOAD Jan 23 18:43:38.849000 audit[3491]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3334 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536323038393261366562666632353131356463623136613934306133 Jan 23 18:43:38.885839 kubelet[2814]: I0123 18:43:38.878843 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d89fb6784-kk9n5" podStartSLOduration=2.56248534 podStartE2EDuration="3.878823852s" podCreationTimestamp="2026-01-23 18:43:35 +0000 UTC" firstStartedPulling="2026-01-23 18:43:36.066162189 +0000 UTC m=+20.203731352" lastFinishedPulling="2026-01-23 18:43:37.382500702 +0000 UTC m=+21.520069864" observedRunningTime="2026-01-23 18:43:38.297661306 +0000 UTC m=+22.435230470" watchObservedRunningTime="2026-01-23 18:43:38.878823852 +0000 UTC m=+23.016393035" Jan 23 18:43:38.905000 audit[3516]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:38.905000 audit[3516]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc724e8ab0 a2=0 a3=7ffc724e8a9c items=0 ppid=2926 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:38.912000 audit[3516]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:38.912000 audit[3516]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc724e8ab0 a2=0 a3=7ffc724e8a9c items=0 ppid=2926 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:38.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:38.962713 containerd[1598]: time="2026-01-23T18:43:38.962644491Z" level=info msg="StartContainer for \"e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa\" returns successfully" Jan 23 18:43:38.981927 systemd[1]: cri-containerd-e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa.scope: Deactivated successfully. Jan 23 18:43:38.984000 audit: BPF prog-id=168 op=UNLOAD Jan 23 18:43:38.986861 containerd[1598]: time="2026-01-23T18:43:38.986773634Z" level=info msg="received container exit event container_id:\"e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa\" id:\"e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa\" pid:3504 exited_at:{seconds:1769193818 nanos:986022230}" Jan 23 18:43:39.030317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e620892a6ebff25115dcb16a940a3f31db4053303969b74072b94e82e1841baa-rootfs.mount: Deactivated successfully. 
Jan 23 18:43:39.124191 kubelet[2814]: E0123 18:43:39.124040 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:39.295915 kubelet[2814]: E0123 18:43:39.295093 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:39.295915 kubelet[2814]: E0123 18:43:39.295475 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:39.297952 containerd[1598]: time="2026-01-23T18:43:39.297015147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:43:40.296844 kubelet[2814]: E0123 18:43:40.296755 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:41.126946 kubelet[2814]: E0123 18:43:41.126702 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:41.163667 containerd[1598]: time="2026-01-23T18:43:41.163555205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:41.165247 containerd[1598]: time="2026-01-23T18:43:41.165115700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 23 18:43:41.166744 containerd[1598]: time="2026-01-23T18:43:41.166684204Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:41.170081 containerd[1598]: time="2026-01-23T18:43:41.170036999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:41.170748 containerd[1598]: time="2026-01-23T18:43:41.170675869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.873627561s" Jan 23 18:43:41.170748 containerd[1598]: time="2026-01-23T18:43:41.170717965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:43:41.174071 containerd[1598]: time="2026-01-23T18:43:41.174026132Z" level=info msg="CreateContainer within sandbox \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:43:41.188773 containerd[1598]: 
time="2026-01-23T18:43:41.187488543Z" level=info msg="Container a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:41.201988 containerd[1598]: time="2026-01-23T18:43:41.201870055Z" level=info msg="CreateContainer within sandbox \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c\"" Jan 23 18:43:41.202942 containerd[1598]: time="2026-01-23T18:43:41.202884100Z" level=info msg="StartContainer for \"a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c\"" Jan 23 18:43:41.207832 containerd[1598]: time="2026-01-23T18:43:41.207790403Z" level=info msg="connecting to shim a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c" address="unix:///run/containerd/s/394109fcac406dfec01a8db7bff7765bbe394ed5f938983b901c4e6b0a607640" protocol=ttrpc version=3 Jan 23 18:43:41.243636 systemd[1]: Started cri-containerd-a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c.scope - libcontainer container a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c. Jan 23 18:43:41.334000 audit: BPF prog-id=169 op=LOAD Jan 23 18:43:41.338145 kernel: kauditd_printk_skb: 90 callbacks suppressed Jan 23 18:43:41.338375 kernel: audit: type=1334 audit(1769193821.334:566): prog-id=169 op=LOAD Jan 23 18:43:41.350725 kernel: audit: type=1300 audit(1769193821.334:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.334000 audit[3555]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.334000 audit: BPF prog-id=170 op=LOAD Jan 23 18:43:41.362918 kernel: audit: type=1327 audit(1769193821.334:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.362991 kernel: audit: type=1334 audit(1769193821.334:567): prog-id=170 op=LOAD Jan 23 18:43:41.334000 audit[3555]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.373529 kernel: audit: type=1300 audit(1769193821.334:567): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.382561 kernel: audit: type=1327 audit(1769193821.334:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.382697 kernel: audit: type=1334 audit(1769193821.335:568): prog-id=170 op=UNLOAD Jan 23 18:43:41.335000 audit: BPF prog-id=170 op=UNLOAD Jan 23 18:43:41.335000 audit[3555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.397108 kernel: audit: type=1300 audit(1769193821.335:568): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.397212 kernel: audit: type=1327 audit(1769193821.335:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.403021 containerd[1598]: time="2026-01-23T18:43:41.402927629Z" level=info msg="StartContainer for \"a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c\" returns successfully" Jan 23 18:43:41.335000 audit: BPF prog-id=169 op=UNLOAD Jan 23 18:43:41.408603 kernel: audit: type=1334 audit(1769193821.335:569): prog-id=169 op=UNLOAD Jan 23 18:43:41.335000 audit[3555]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:41.335000 audit: BPF prog-id=171 op=LOAD Jan 23 18:43:41.335000 audit[3555]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3334 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:41.335000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134636233316462623636306565336262666663623262366438326138 Jan 23 18:43:42.103950 systemd[1]: cri-containerd-a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c.scope: Deactivated successfully. Jan 23 18:43:42.104570 systemd[1]: cri-containerd-a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c.scope: Consumed 770ms CPU time, 182M memory peak, 4.1M read from disk, 171.3M written to disk. Jan 23 18:43:42.107322 containerd[1598]: time="2026-01-23T18:43:42.107205417Z" level=info msg="received container exit event container_id:\"a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c\" id:\"a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c\" pid:3567 exited_at:{seconds:1769193822 nanos:106518471}" Jan 23 18:43:42.109000 audit: BPF prog-id=171 op=UNLOAD Jan 23 18:43:42.209893 kubelet[2814]: I0123 18:43:42.209840 2814 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:43:42.227418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4cb31dbb660ee3bbffcb2b6d82a899624108557466c5d359a012eec58d0d98c-rootfs.mount: Deactivated successfully. Jan 23 18:43:42.276040 systemd[1]: Created slice kubepods-besteffort-podc6f4bf65_2b8c_4712_a434_da7d69d938c0.slice - libcontainer container kubepods-besteffort-podc6f4bf65_2b8c_4712_a434_da7d69d938c0.slice. Jan 23 18:43:42.294985 systemd[1]: Created slice kubepods-besteffort-poda2fec986_96f7_4105_9373_012c1fac3001.slice - libcontainer container kubepods-besteffort-poda2fec986_96f7_4105_9373_012c1fac3001.slice. Jan 23 18:43:42.307329 kubelet[2814]: I0123 18:43:42.305543 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp2lc\" (UniqueName: \"kubernetes.io/projected/72e54e47-91e4-415c-876e-aa36180ac3b1-kube-api-access-zp2lc\") pod \"goldmane-666569f655-276fc\" (UID: \"72e54e47-91e4-415c-876e-aa36180ac3b1\") " pod="calico-system/goldmane-666569f655-276fc" Jan 23 18:43:42.307329 kubelet[2814]: I0123 18:43:42.305612 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac222387-3b7e-4f68-972a-ec412c252e8d-config-volume\") pod \"coredns-668d6bf9bc-p5dcz\" (UID: \"ac222387-3b7e-4f68-972a-ec412c252e8d\") " pod="kube-system/coredns-668d6bf9bc-p5dcz" Jan 23 18:43:42.307329 kubelet[2814]: I0123 18:43:42.305646 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42rl\" (UniqueName: \"kubernetes.io/projected/2647b35f-a248-488d-8f41-2052dd32f727-kube-api-access-g42rl\") pod \"calico-apiserver-56878495cb-jls4r\" (UID: \"2647b35f-a248-488d-8f41-2052dd32f727\") " pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" Jan 23 18:43:42.307329 kubelet[2814]: I0123 18:43:42.305675 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/50725488-4a1d-4f65-a7da-a4a923730733-calico-apiserver-certs\") pod \"calico-apiserver-56878495cb-t9bs5\" (UID: \"50725488-4a1d-4f65-a7da-a4a923730733\") " pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" Jan 23 18:43:42.307329 kubelet[2814]: I0123 18:43:42.305700 2814 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282xs\" (UniqueName: \"kubernetes.io/projected/ac222387-3b7e-4f68-972a-ec412c252e8d-kube-api-access-282xs\") pod \"coredns-668d6bf9bc-p5dcz\" (UID: \"ac222387-3b7e-4f68-972a-ec412c252e8d\") " pod="kube-system/coredns-668d6bf9bc-p5dcz" Jan 23 18:43:42.307065 systemd[1]: Created slice kubepods-burstable-podac222387_3b7e_4f68_972a_ec412c252e8d.slice - libcontainer container kubepods-burstable-podac222387_3b7e_4f68_972a_ec412c252e8d.slice. Jan 23 18:43:42.307746 kubelet[2814]: I0123 18:43:42.305726 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2647b35f-a248-488d-8f41-2052dd32f727-calico-apiserver-certs\") pod \"calico-apiserver-56878495cb-jls4r\" (UID: \"2647b35f-a248-488d-8f41-2052dd32f727\") " pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" Jan 23 18:43:42.307746 kubelet[2814]: I0123 18:43:42.305750 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2fec986-96f7-4105-9373-012c1fac3001-whisker-ca-bundle\") pod \"whisker-6bb854b445-fcpcc\" (UID: \"a2fec986-96f7-4105-9373-012c1fac3001\") " pod="calico-system/whisker-6bb854b445-fcpcc" Jan 23 18:43:42.307746 kubelet[2814]: I0123 18:43:42.305774 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dlj\" (UniqueName: \"kubernetes.io/projected/c6f4bf65-2b8c-4712-a434-da7d69d938c0-kube-api-access-r2dlj\") pod \"calico-kube-controllers-5bdcd99c5b-6vx2x\" (UID: \"c6f4bf65-2b8c-4712-a434-da7d69d938c0\") " pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" Jan 23 18:43:42.307746 kubelet[2814]: I0123 18:43:42.305797 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2fec986-96f7-4105-9373-012c1fac3001-whisker-backend-key-pair\") pod \"whisker-6bb854b445-fcpcc\" (UID: \"a2fec986-96f7-4105-9373-012c1fac3001\") " pod="calico-system/whisker-6bb854b445-fcpcc" Jan 23 18:43:42.307746 kubelet[2814]: I0123 18:43:42.305823 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2d8s\" (UniqueName: \"kubernetes.io/projected/33b60d66-70dc-47d9-aa85-505e7fd31a2d-kube-api-access-v2d8s\") pod \"coredns-668d6bf9bc-9vpfw\" (UID: \"33b60d66-70dc-47d9-aa85-505e7fd31a2d\") " pod="kube-system/coredns-668d6bf9bc-9vpfw" Jan 23 18:43:42.307985 kubelet[2814]: I0123 18:43:42.305885 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8l8s\" (UniqueName: \"kubernetes.io/projected/50725488-4a1d-4f65-a7da-a4a923730733-kube-api-access-x8l8s\") pod \"calico-apiserver-56878495cb-t9bs5\" (UID: \"50725488-4a1d-4f65-a7da-a4a923730733\") " pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" Jan 23 18:43:42.307985 kubelet[2814]: I0123 18:43:42.305909 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4067c734-cff1-4419-879a-3fc371d855f2-calico-apiserver-certs\") pod \"calico-apiserver-6bd45f567-rc4xx\" (UID: \"4067c734-cff1-4419-879a-3fc371d855f2\") " pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" Jan 23 18:43:42.307985 kubelet[2814]: 
I0123 18:43:42.305990 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72e54e47-91e4-415c-876e-aa36180ac3b1-config\") pod \"goldmane-666569f655-276fc\" (UID: \"72e54e47-91e4-415c-876e-aa36180ac3b1\") " pod="calico-system/goldmane-666569f655-276fc" Jan 23 18:43:42.307985 kubelet[2814]: I0123 18:43:42.306018 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6f4bf65-2b8c-4712-a434-da7d69d938c0-tigera-ca-bundle\") pod \"calico-kube-controllers-5bdcd99c5b-6vx2x\" (UID: \"c6f4bf65-2b8c-4712-a434-da7d69d938c0\") " pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" Jan 23 18:43:42.307985 kubelet[2814]: I0123 18:43:42.306042 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjcc4\" (UniqueName: \"kubernetes.io/projected/4067c734-cff1-4419-879a-3fc371d855f2-kube-api-access-bjcc4\") pod \"calico-apiserver-6bd45f567-rc4xx\" (UID: \"4067c734-cff1-4419-879a-3fc371d855f2\") " pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" Jan 23 18:43:42.308195 kubelet[2814]: I0123 18:43:42.306125 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33b60d66-70dc-47d9-aa85-505e7fd31a2d-config-volume\") pod \"coredns-668d6bf9bc-9vpfw\" (UID: \"33b60d66-70dc-47d9-aa85-505e7fd31a2d\") " pod="kube-system/coredns-668d6bf9bc-9vpfw" Jan 23 18:43:42.308195 kubelet[2814]: I0123 18:43:42.306149 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72e54e47-91e4-415c-876e-aa36180ac3b1-goldmane-ca-bundle\") pod \"goldmane-666569f655-276fc\" (UID: \"72e54e47-91e4-415c-876e-aa36180ac3b1\") " pod="calico-system/goldmane-666569f655-276fc" Jan 23 18:43:42.308195 kubelet[2814]: I0123 18:43:42.306175 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj27\" (UniqueName: \"kubernetes.io/projected/a2fec986-96f7-4105-9373-012c1fac3001-kube-api-access-dsj27\") pod \"whisker-6bb854b445-fcpcc\" (UID: \"a2fec986-96f7-4105-9373-012c1fac3001\") " pod="calico-system/whisker-6bb854b445-fcpcc" Jan 23 18:43:42.308195 kubelet[2814]: I0123 18:43:42.306240 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/72e54e47-91e4-415c-876e-aa36180ac3b1-goldmane-key-pair\") pod \"goldmane-666569f655-276fc\" (UID: \"72e54e47-91e4-415c-876e-aa36180ac3b1\") " pod="calico-system/goldmane-666569f655-276fc" Jan 23 18:43:42.321321 kubelet[2814]: E0123 18:43:42.318792 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:42.326038 systemd[1]: Created slice kubepods-burstable-pod33b60d66_70dc_47d9_aa85_505e7fd31a2d.slice - libcontainer container kubepods-burstable-pod33b60d66_70dc_47d9_aa85_505e7fd31a2d.slice. Jan 23 18:43:42.342628 systemd[1]: Created slice kubepods-besteffort-pod50725488_4a1d_4f65_a7da_a4a923730733.slice - libcontainer container kubepods-besteffort-pod50725488_4a1d_4f65_a7da_a4a923730733.slice. 
Jan 23 18:43:42.351374 systemd[1]: Created slice kubepods-besteffort-pod72e54e47_91e4_415c_876e_aa36180ac3b1.slice - libcontainer container kubepods-besteffort-pod72e54e47_91e4_415c_876e_aa36180ac3b1.slice. Jan 23 18:43:42.359351 systemd[1]: Created slice kubepods-besteffort-pod2647b35f_a248_488d_8f41_2052dd32f727.slice - libcontainer container kubepods-besteffort-pod2647b35f_a248_488d_8f41_2052dd32f727.slice. Jan 23 18:43:42.365854 systemd[1]: Created slice kubepods-besteffort-pod4067c734_cff1_4419_879a_3fc371d855f2.slice - libcontainer container kubepods-besteffort-pod4067c734_cff1_4419_879a_3fc371d855f2.slice. Jan 23 18:43:42.601861 containerd[1598]: time="2026-01-23T18:43:42.601794228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb854b445-fcpcc,Uid:a2fec986-96f7-4105-9373-012c1fac3001,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:42.623101 kubelet[2814]: E0123 18:43:42.622369 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:42.628303 containerd[1598]: time="2026-01-23T18:43:42.628185206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5dcz,Uid:ac222387-3b7e-4f68-972a-ec412c252e8d,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:42.637881 kubelet[2814]: E0123 18:43:42.637788 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:42.639319 containerd[1598]: time="2026-01-23T18:43:42.639226230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9vpfw,Uid:33b60d66-70dc-47d9-aa85-505e7fd31a2d,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:42.648762 containerd[1598]: time="2026-01-23T18:43:42.648708034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-t9bs5,Uid:50725488-4a1d-4f65-a7da-a4a923730733,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:43:42.660118 containerd[1598]: time="2026-01-23T18:43:42.659972671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-276fc,Uid:72e54e47-91e4-415c-876e-aa36180ac3b1,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:42.666197 containerd[1598]: time="2026-01-23T18:43:42.666167638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-jls4r,Uid:2647b35f-a248-488d-8f41-2052dd32f727,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:43:42.688646 containerd[1598]: time="2026-01-23T18:43:42.688542293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bd45f567-rc4xx,Uid:4067c734-cff1-4419-879a-3fc371d855f2,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:43:42.792113 containerd[1598]: time="2026-01-23T18:43:42.791950492Z" level=error msg="Failed to destroy network for sandbox \"e834cddb9116c338a2c5f5f8f868e68312179e2a8e3b8d7429334358d4bc5def\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.800028 containerd[1598]: time="2026-01-23T18:43:42.799946641Z" level=error msg="Failed to destroy network for sandbox \"e6eb3f5c7163cc8bfed328953709a2c4834309cb816394af1afba7512082c9d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 23 18:43:42.802315 containerd[1598]: time="2026-01-23T18:43:42.801432266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9vpfw,Uid:33b60d66-70dc-47d9-aa85-505e7fd31a2d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e834cddb9116c338a2c5f5f8f868e68312179e2a8e3b8d7429334358d4bc5def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.802430 kubelet[2814]: E0123 18:43:42.801919 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e834cddb9116c338a2c5f5f8f868e68312179e2a8e3b8d7429334358d4bc5def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.802430 kubelet[2814]: E0123 18:43:42.801997 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e834cddb9116c338a2c5f5f8f868e68312179e2a8e3b8d7429334358d4bc5def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9vpfw" Jan 23 18:43:42.802430 kubelet[2814]: E0123 18:43:42.802047 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e834cddb9116c338a2c5f5f8f868e68312179e2a8e3b8d7429334358d4bc5def\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9vpfw" Jan 23 18:43:42.802568 kubelet[2814]: E0123 18:43:42.802099 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9vpfw_kube-system(33b60d66-70dc-47d9-aa85-505e7fd31a2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9vpfw_kube-system(33b60d66-70dc-47d9-aa85-505e7fd31a2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e834cddb9116c338a2c5f5f8f868e68312179e2a8e3b8d7429334358d4bc5def\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9vpfw" podUID="33b60d66-70dc-47d9-aa85-505e7fd31a2d" Jan 23 18:43:42.804630 containerd[1598]: time="2026-01-23T18:43:42.804545909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-t9bs5,Uid:50725488-4a1d-4f65-a7da-a4a923730733,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6eb3f5c7163cc8bfed328953709a2c4834309cb816394af1afba7512082c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.805550 kubelet[2814]: E0123 18:43:42.805523 2814 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6eb3f5c7163cc8bfed328953709a2c4834309cb816394af1afba7512082c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.805675 kubelet[2814]: E0123 18:43:42.805556 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6eb3f5c7163cc8bfed328953709a2c4834309cb816394af1afba7512082c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" Jan 23 18:43:42.805675 kubelet[2814]: E0123 18:43:42.805573 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6eb3f5c7163cc8bfed328953709a2c4834309cb816394af1afba7512082c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" Jan 23 18:43:42.805675 kubelet[2814]: E0123 18:43:42.805601 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56878495cb-t9bs5_calico-apiserver(50725488-4a1d-4f65-a7da-a4a923730733)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56878495cb-t9bs5_calico-apiserver(50725488-4a1d-4f65-a7da-a4a923730733)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6eb3f5c7163cc8bfed328953709a2c4834309cb816394af1afba7512082c9d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:43:42.829520 containerd[1598]: time="2026-01-23T18:43:42.829424629Z" level=error msg="Failed to destroy network for sandbox \"7ae2611eef368a0615de1421b3c5c3dc5549724047c4be9da7edea27f6f96d30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.832934 containerd[1598]: time="2026-01-23T18:43:42.832629568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb854b445-fcpcc,Uid:a2fec986-96f7-4105-9373-012c1fac3001,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae2611eef368a0615de1421b3c5c3dc5549724047c4be9da7edea27f6f96d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.834928 kubelet[2814]: E0123 18:43:42.834877 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae2611eef368a0615de1421b3c5c3dc5549724047c4be9da7edea27f6f96d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 23 18:43:42.835803 kubelet[2814]: E0123 18:43:42.835552 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae2611eef368a0615de1421b3c5c3dc5549724047c4be9da7edea27f6f96d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb854b445-fcpcc" Jan 23 18:43:42.835803 kubelet[2814]: E0123 18:43:42.835582 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae2611eef368a0615de1421b3c5c3dc5549724047c4be9da7edea27f6f96d30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb854b445-fcpcc" Jan 23 18:43:42.836801 kubelet[2814]: E0123 18:43:42.836357 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bb854b445-fcpcc_calico-system(a2fec986-96f7-4105-9373-012c1fac3001)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bb854b445-fcpcc_calico-system(a2fec986-96f7-4105-9373-012c1fac3001)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ae2611eef368a0615de1421b3c5c3dc5549724047c4be9da7edea27f6f96d30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb854b445-fcpcc" podUID="a2fec986-96f7-4105-9373-012c1fac3001" Jan 23 18:43:42.856838 containerd[1598]: time="2026-01-23T18:43:42.856685493Z" level=error msg="Failed to destroy network for sandbox \"95704af4bc50d7bdb96778d663f112dd1a4c0a799c4042b414750f09ff6dd4db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.863924 containerd[1598]: time="2026-01-23T18:43:42.863876745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5dcz,Uid:ac222387-3b7e-4f68-972a-ec412c252e8d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95704af4bc50d7bdb96778d663f112dd1a4c0a799c4042b414750f09ff6dd4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.864226 kubelet[2814]: E0123 18:43:42.864129 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95704af4bc50d7bdb96778d663f112dd1a4c0a799c4042b414750f09ff6dd4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.864226 kubelet[2814]: E0123 18:43:42.864195 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95704af4bc50d7bdb96778d663f112dd1a4c0a799c4042b414750f09ff6dd4db\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5dcz" Jan 23 18:43:42.864226 kubelet[2814]: E0123 18:43:42.864217 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95704af4bc50d7bdb96778d663f112dd1a4c0a799c4042b414750f09ff6dd4db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5dcz" Jan 23 18:43:42.865120 kubelet[2814]: E0123 18:43:42.864996 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5dcz_kube-system(ac222387-3b7e-4f68-972a-ec412c252e8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5dcz_kube-system(ac222387-3b7e-4f68-972a-ec412c252e8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95704af4bc50d7bdb96778d663f112dd1a4c0a799c4042b414750f09ff6dd4db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5dcz" podUID="ac222387-3b7e-4f68-972a-ec412c252e8d" Jan 23 18:43:42.865902 containerd[1598]: time="2026-01-23T18:43:42.865866045Z" level=error msg="Failed to destroy network for sandbox \"e6b4d61703dad713aa732afed33a9e858f9d78588bc4f407bf0dabb505bda4a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.873025 containerd[1598]: time="2026-01-23T18:43:42.872912857Z" level=error msg="Failed to destroy network for sandbox \"bc45d6785764785e1f8fa41161e027eaa3463ef0991c905ab966742485e22d9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.875862 containerd[1598]: time="2026-01-23T18:43:42.874415587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bd45f567-rc4xx,Uid:4067c734-cff1-4419-879a-3fc371d855f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b4d61703dad713aa732afed33a9e858f9d78588bc4f407bf0dabb505bda4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.875997 kubelet[2814]: E0123 18:43:42.874806 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b4d61703dad713aa732afed33a9e858f9d78588bc4f407bf0dabb505bda4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.875997 kubelet[2814]: E0123 18:43:42.874850 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b4d61703dad713aa732afed33a9e858f9d78588bc4f407bf0dabb505bda4a9\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" Jan 23 18:43:42.875997 kubelet[2814]: E0123 18:43:42.874867 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6b4d61703dad713aa732afed33a9e858f9d78588bc4f407bf0dabb505bda4a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" Jan 23 18:43:42.876136 kubelet[2814]: E0123 18:43:42.874901 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bd45f567-rc4xx_calico-apiserver(4067c734-cff1-4419-879a-3fc371d855f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bd45f567-rc4xx_calico-apiserver(4067c734-cff1-4419-879a-3fc371d855f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6b4d61703dad713aa732afed33a9e858f9d78588bc4f407bf0dabb505bda4a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:43:42.876512 containerd[1598]: time="2026-01-23T18:43:42.876426217Z" level=error msg="Failed to destroy network for sandbox \"bcdeb9a5d37ad3beace68604649ec5a1eaac1bbf45776a76d3ef8d50c2470ff2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.877669 containerd[1598]: time="2026-01-23T18:43:42.877490153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-276fc,Uid:72e54e47-91e4-415c-876e-aa36180ac3b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc45d6785764785e1f8fa41161e027eaa3463ef0991c905ab966742485e22d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.878664 kubelet[2814]: E0123 18:43:42.877865 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc45d6785764785e1f8fa41161e027eaa3463ef0991c905ab966742485e22d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.878664 kubelet[2814]: E0123 18:43:42.877960 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc45d6785764785e1f8fa41161e027eaa3463ef0991c905ab966742485e22d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-276fc" Jan 23 18:43:42.878664 kubelet[2814]: E0123 18:43:42.877993 2814 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc45d6785764785e1f8fa41161e027eaa3463ef0991c905ab966742485e22d9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-276fc" Jan 23 18:43:42.878814 kubelet[2814]: E0123 18:43:42.878048 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-276fc_calico-system(72e54e47-91e4-415c-876e-aa36180ac3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-276fc_calico-system(72e54e47-91e4-415c-876e-aa36180ac3b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc45d6785764785e1f8fa41161e027eaa3463ef0991c905ab966742485e22d9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:43:42.879981 containerd[1598]: time="2026-01-23T18:43:42.879854416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-jls4r,Uid:2647b35f-a248-488d-8f41-2052dd32f727,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdeb9a5d37ad3beace68604649ec5a1eaac1bbf45776a76d3ef8d50c2470ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.880193 kubelet[2814]: E0123 18:43:42.880150 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdeb9a5d37ad3beace68604649ec5a1eaac1bbf45776a76d3ef8d50c2470ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.880247 kubelet[2814]: E0123 18:43:42.880202 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdeb9a5d37ad3beace68604649ec5a1eaac1bbf45776a76d3ef8d50c2470ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" Jan 23 18:43:42.880247 kubelet[2814]: E0123 18:43:42.880225 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcdeb9a5d37ad3beace68604649ec5a1eaac1bbf45776a76d3ef8d50c2470ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" Jan 23 18:43:42.880583 kubelet[2814]: E0123 18:43:42.880401 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56878495cb-jls4r_calico-apiserver(2647b35f-a248-488d-8f41-2052dd32f727)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56878495cb-jls4r_calico-apiserver(2647b35f-a248-488d-8f41-2052dd32f727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcdeb9a5d37ad3beace68604649ec5a1eaac1bbf45776a76d3ef8d50c2470ff2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:43:42.886174 containerd[1598]: time="2026-01-23T18:43:42.886139647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdcd99c5b-6vx2x,Uid:c6f4bf65-2b8c-4712-a434-da7d69d938c0,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:42.961781 containerd[1598]: time="2026-01-23T18:43:42.961657842Z" level=error msg="Failed to destroy network for sandbox \"59bc109c8b8507246f49eaa0010f41b5e15fca633e39609a24c3e4d06895747f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.965104 containerd[1598]: time="2026-01-23T18:43:42.965021869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdcd99c5b-6vx2x,Uid:c6f4bf65-2b8c-4712-a434-da7d69d938c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59bc109c8b8507246f49eaa0010f41b5e15fca633e39609a24c3e4d06895747f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.965672 kubelet[2814]: E0123 18:43:42.965593 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59bc109c8b8507246f49eaa0010f41b5e15fca633e39609a24c3e4d06895747f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:42.965672 kubelet[2814]: E0123 18:43:42.965691 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59bc109c8b8507246f49eaa0010f41b5e15fca633e39609a24c3e4d06895747f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" Jan 23 18:43:42.965672 kubelet[2814]: E0123 18:43:42.965726 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59bc109c8b8507246f49eaa0010f41b5e15fca633e39609a24c3e4d06895747f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" Jan 23 18:43:42.966001 kubelet[2814]: E0123 18:43:42.965801 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bdcd99c5b-6vx2x_calico-system(c6f4bf65-2b8c-4712-a434-da7d69d938c0)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bdcd99c5b-6vx2x_calico-system(c6f4bf65-2b8c-4712-a434-da7d69d938c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59bc109c8b8507246f49eaa0010f41b5e15fca633e39609a24c3e4d06895747f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:43:43.133379 systemd[1]: Created slice kubepods-besteffort-podf72bd6e0_6290_4ad0_99d3_a580eaff8fda.slice - libcontainer container kubepods-besteffort-podf72bd6e0_6290_4ad0_99d3_a580eaff8fda.slice. Jan 23 18:43:43.137521 containerd[1598]: time="2026-01-23T18:43:43.137468293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2smd,Uid:f72bd6e0-6290-4ad0-99d3-a580eaff8fda,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:43.212543 containerd[1598]: time="2026-01-23T18:43:43.212404865Z" level=error msg="Failed to destroy network for sandbox \"73fec54dc2db4f8eef6cd5c3be01eff3f1ed59b60b79b34b35643e49de341524\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:43.216167 containerd[1598]: time="2026-01-23T18:43:43.216038293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2smd,Uid:f72bd6e0-6290-4ad0-99d3-a580eaff8fda,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fec54dc2db4f8eef6cd5c3be01eff3f1ed59b60b79b34b35643e49de341524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:43.216547 kubelet[2814]: E0123 18:43:43.216498 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fec54dc2db4f8eef6cd5c3be01eff3f1ed59b60b79b34b35643e49de341524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:43:43.217010 kubelet[2814]: E0123 18:43:43.216559 2814 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fec54dc2db4f8eef6cd5c3be01eff3f1ed59b60b79b34b35643e49de341524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:43.217010 kubelet[2814]: E0123 18:43:43.216592 2814 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fec54dc2db4f8eef6cd5c3be01eff3f1ed59b60b79b34b35643e49de341524\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2smd" Jan 23 18:43:43.217010 kubelet[2814]: E0123 18:43:43.216645 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73fec54dc2db4f8eef6cd5c3be01eff3f1ed59b60b79b34b35643e49de341524\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:43.325086 kubelet[2814]: E0123 18:43:43.325000 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:43.326200 containerd[1598]: time="2026-01-23T18:43:43.325924733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:43:48.300939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857030023.mount: Deactivated successfully. Jan 23 18:43:48.551029 containerd[1598]: time="2026-01-23T18:43:48.550949305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 23 18:43:48.554717 containerd[1598]: time="2026-01-23T18:43:48.554580560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:48.555230 containerd[1598]: time="2026-01-23T18:43:48.555168284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.229201936s" Jan 23 18:43:48.555438 containerd[1598]: time="2026-01-23T18:43:48.555232152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:43:48.555869 containerd[1598]: time="2026-01-23T18:43:48.555810256Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:48.556422 containerd[1598]: time="2026-01-23T18:43:48.556371976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:43:48.572148 containerd[1598]: time="2026-01-23T18:43:48.572073459Z" level=info msg="CreateContainer within sandbox \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:43:48.593770 containerd[1598]: time="2026-01-23T18:43:48.593579936Z" level=info msg="Container b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:48.608359 containerd[1598]: time="2026-01-23T18:43:48.608181411Z" level=info msg="CreateContainer within sandbox \"58183ba3903c1d3e41826751f28aeffe78f0a81b72de7a528b39cce516710344\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b\"" Jan 23 18:43:48.609387 containerd[1598]: time="2026-01-23T18:43:48.609057877Z" level=info msg="StartContainer for \"b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b\"" Jan 23 18:43:48.611618 containerd[1598]: time="2026-01-23T18:43:48.611512613Z" level=info msg="connecting to shim b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b" address="unix:///run/containerd/s/394109fcac406dfec01a8db7bff7765bbe394ed5f938983b901c4e6b0a607640" protocol=ttrpc version=3 Jan 23 18:43:48.707663 systemd[1]: Started cri-containerd-b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b.scope - libcontainer container b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b. Jan 23 18:43:48.804000 audit: BPF prog-id=172 op=LOAD Jan 23 18:43:48.806745 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 23 18:43:48.806871 kernel: audit: type=1334 audit(1769193828.804:572): prog-id=172 op=LOAD Jan 23 18:43:48.804000 audit[3916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.821047 kernel: audit: type=1300 audit(1769193828.804:572): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.821130 kernel: audit: type=1327 audit(1769193828.804:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.832443 kernel: audit: type=1334 audit(1769193828.804:573): prog-id=173 op=LOAD Jan 23 18:43:48.804000 audit: BPF prog-id=173 op=LOAD Jan 23 18:43:48.804000 audit[3916]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.847754 kernel: audit: type=1300 audit(1769193828.804:573): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.804000 audit: BPF prog-id=173 op=UNLOAD Jan 23 18:43:48.864097 kernel: audit: 
type=1327 audit(1769193828.804:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.864207 kernel: audit: type=1334 audit(1769193828.804:574): prog-id=173 op=UNLOAD Jan 23 18:43:48.864325 kernel: audit: type=1300 audit(1769193828.804:574): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.804000 audit[3916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.872374 containerd[1598]: time="2026-01-23T18:43:48.872239655Z" level=info msg="StartContainer for \"b0e35841c3e776e6378f64c74864c2ad80464395e2e39b9040065b5e9f763a8b\" returns successfully" Jan 23 18:43:48.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.886533 kernel: audit: type=1327 audit(1769193828.804:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.886739 kernel: audit: type=1334 audit(1769193828.804:575): prog-id=172 op=UNLOAD Jan 23 18:43:48.804000 audit: BPF prog-id=172 op=UNLOAD Jan 23 18:43:48.804000 audit[3916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:48.804000 audit: BPF prog-id=174 op=LOAD Jan 23 18:43:48.804000 audit[3916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3334 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:48.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230653335383431633365373736653633373866363463373438363463 Jan 23 18:43:49.013176 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:43:49.013403 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jan 23 18:43:49.351703 kubelet[2814]: E0123 18:43:49.351085 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:49.362835 kubelet[2814]: I0123 18:43:49.362657 2814 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsj27\" (UniqueName: \"kubernetes.io/projected/a2fec986-96f7-4105-9373-012c1fac3001-kube-api-access-dsj27\") pod \"a2fec986-96f7-4105-9373-012c1fac3001\" (UID: \"a2fec986-96f7-4105-9373-012c1fac3001\") " Jan 23 18:43:49.362835 kubelet[2814]: I0123 18:43:49.362770 2814 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2fec986-96f7-4105-9373-012c1fac3001-whisker-backend-key-pair\") pod \"a2fec986-96f7-4105-9373-012c1fac3001\" (UID: \"a2fec986-96f7-4105-9373-012c1fac3001\") " Jan 23 18:43:49.363480 kubelet[2814]: I0123 18:43:49.362962 2814 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2fec986-96f7-4105-9373-012c1fac3001-whisker-ca-bundle\") pod \"a2fec986-96f7-4105-9373-012c1fac3001\" (UID: \"a2fec986-96f7-4105-9373-012c1fac3001\") " Jan 23 18:43:49.364075 kubelet[2814]: I0123 18:43:49.364023 2814 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2fec986-96f7-4105-9373-012c1fac3001-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a2fec986-96f7-4105-9373-012c1fac3001" (UID: "a2fec986-96f7-4105-9373-012c1fac3001"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:43:49.376914 kubelet[2814]: I0123 18:43:49.376867 2814 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fec986-96f7-4105-9373-012c1fac3001-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a2fec986-96f7-4105-9373-012c1fac3001" (UID: "a2fec986-96f7-4105-9373-012c1fac3001"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:43:49.378908 systemd[1]: var-lib-kubelet-pods-a2fec986\x2d96f7\x2d4105\x2d9373\x2d012c1fac3001-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:43:49.387940 kubelet[2814]: I0123 18:43:49.384843 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-72rj2" podStartSLOduration=1.974264147 podStartE2EDuration="14.384818624s" podCreationTimestamp="2026-01-23 18:43:35 +0000 UTC" firstStartedPulling="2026-01-23 18:43:36.14609576 +0000 UTC m=+20.283664913" lastFinishedPulling="2026-01-23 18:43:48.556650227 +0000 UTC m=+32.694219390" observedRunningTime="2026-01-23 18:43:49.38045975 +0000 UTC m=+33.518028912" watchObservedRunningTime="2026-01-23 18:43:49.384818624 +0000 UTC m=+33.522387787" Jan 23 18:43:49.388522 kubelet[2814]: I0123 18:43:49.388466 2814 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fec986-96f7-4105-9373-012c1fac3001-kube-api-access-dsj27" (OuterVolumeSpecName: "kube-api-access-dsj27") pod "a2fec986-96f7-4105-9373-012c1fac3001" (UID: "a2fec986-96f7-4105-9373-012c1fac3001"). InnerVolumeSpecName "kube-api-access-dsj27". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:43:49.388601 systemd[1]: var-lib-kubelet-pods-a2fec986\x2d96f7\x2d4105\x2d9373\x2d012c1fac3001-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddsj27.mount: Deactivated successfully. Jan 23 18:43:49.464166 kubelet[2814]: I0123 18:43:49.463730 2814 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2fec986-96f7-4105-9373-012c1fac3001-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 18:43:49.464166 kubelet[2814]: I0123 18:43:49.463770 2814 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsj27\" (UniqueName: \"kubernetes.io/projected/a2fec986-96f7-4105-9373-012c1fac3001-kube-api-access-dsj27\") on node \"localhost\" DevicePath \"\"" Jan 23 18:43:49.464166 kubelet[2814]: I0123 18:43:49.463783 2814 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2fec986-96f7-4105-9373-012c1fac3001-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 18:43:49.657907 systemd[1]: Removed slice kubepods-besteffort-poda2fec986_96f7_4105_9373_012c1fac3001.slice - libcontainer container kubepods-besteffort-poda2fec986_96f7_4105_9373_012c1fac3001.slice. Jan 23 18:43:49.727808 systemd[1]: Created slice kubepods-besteffort-pod934ceaea_a5ec_4119_99e0_f63128ff37ad.slice - libcontainer container kubepods-besteffort-pod934ceaea_a5ec_4119_99e0_f63128ff37ad.slice. Jan 23 18:43:49.866876 kubelet[2814]: I0123 18:43:49.866708 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkf2v\" (UniqueName: \"kubernetes.io/projected/934ceaea-a5ec-4119-99e0-f63128ff37ad-kube-api-access-rkf2v\") pod \"whisker-d457c8689-kch4w\" (UID: \"934ceaea-a5ec-4119-99e0-f63128ff37ad\") " pod="calico-system/whisker-d457c8689-kch4w" Jan 23 18:43:49.866876 kubelet[2814]: I0123 18:43:49.866767 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/934ceaea-a5ec-4119-99e0-f63128ff37ad-whisker-backend-key-pair\") pod \"whisker-d457c8689-kch4w\" (UID: \"934ceaea-a5ec-4119-99e0-f63128ff37ad\") " pod="calico-system/whisker-d457c8689-kch4w" Jan 23 18:43:49.866876 kubelet[2814]: I0123 18:43:49.866799 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/934ceaea-a5ec-4119-99e0-f63128ff37ad-whisker-ca-bundle\") pod \"whisker-d457c8689-kch4w\" (UID: \"934ceaea-a5ec-4119-99e0-f63128ff37ad\") " pod="calico-system/whisker-d457c8689-kch4w" Jan 23 18:43:50.034536 containerd[1598]: time="2026-01-23T18:43:50.033772618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d457c8689-kch4w,Uid:934ceaea-a5ec-4119-99e0-f63128ff37ad,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:50.127693 kubelet[2814]: I0123 18:43:50.127588 2814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2fec986-96f7-4105-9373-012c1fac3001" path="/var/lib/kubelet/pods/a2fec986-96f7-4105-9373-012c1fac3001/volumes" Jan 23 18:43:50.278375 systemd-networkd[1511]: cali1c858c8d549: Link UP Jan 23 18:43:50.278726 systemd-networkd[1511]: cali1c858c8d549: Gained carrier Jan 23 18:43:50.297932 containerd[1598]: 2026-01-23 18:43:50.073 [INFO][4034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 
18:43:50.297932 containerd[1598]: 2026-01-23 18:43:50.101 [INFO][4034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--d457c8689--kch4w-eth0 whisker-d457c8689- calico-system 934ceaea-a5ec-4119-99e0-f63128ff37ad 958 0 2026-01-23 18:43:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d457c8689 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-d457c8689-kch4w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1c858c8d549 [] [] }} ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-" Jan 23 18:43:50.297932 containerd[1598]: 2026-01-23 18:43:50.102 [INFO][4034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.297932 containerd[1598]: 2026-01-23 18:43:50.220 [INFO][4049] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" HandleID="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Workload="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.221 [INFO][4049] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" HandleID="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Workload="localhost-k8s-whisker--d457c8689--kch4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000509690), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-d457c8689-kch4w", "timestamp":"2026-01-23 18:43:50.220742679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.221 [INFO][4049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.222 [INFO][4049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.222 [INFO][4049] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.232 [INFO][4049] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" host="localhost" Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.240 [INFO][4049] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.245 [INFO][4049] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.247 [INFO][4049] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.249 [INFO][4049] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:50.298426 containerd[1598]: 2026-01-23 18:43:50.249 [INFO][4049] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" host="localhost" Jan 23 18:43:50.298774 containerd[1598]: 2026-01-23 18:43:50.251 [INFO][4049] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b Jan 23 18:43:50.298774 containerd[1598]: 2026-01-23 18:43:50.257 [INFO][4049] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" host="localhost" Jan 23 18:43:50.298774 containerd[1598]: 2026-01-23 18:43:50.262 [INFO][4049] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" host="localhost" Jan 23 18:43:50.298774 containerd[1598]: 2026-01-23 18:43:50.262 [INFO][4049] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" host="localhost" Jan 23 18:43:50.298774 containerd[1598]: 2026-01-23 18:43:50.262 [INFO][4049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:43:50.298774 containerd[1598]: 2026-01-23 18:43:50.262 [INFO][4049] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" HandleID="k8s-pod-network.0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Workload="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.298930 containerd[1598]: 2026-01-23 18:43:50.265 [INFO][4034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d457c8689--kch4w-eth0", GenerateName:"whisker-d457c8689-", Namespace:"calico-system", SelfLink:"", UID:"934ceaea-a5ec-4119-99e0-f63128ff37ad", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d457c8689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-d457c8689-kch4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c858c8d549", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:50.298930 containerd[1598]: 2026-01-23 18:43:50.265 [INFO][4034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.299054 containerd[1598]: 2026-01-23 18:43:50.265 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c858c8d549 ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.299054 containerd[1598]: 2026-01-23 18:43:50.278 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.299114 containerd[1598]: 2026-01-23 18:43:50.281 [INFO][4034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d457c8689--kch4w-eth0", GenerateName:"whisker-d457c8689-", Namespace:"calico-system", SelfLink:"", UID:"934ceaea-a5ec-4119-99e0-f63128ff37ad", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d457c8689", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b", Pod:"whisker-d457c8689-kch4w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1c858c8d549", MAC:"6e:d6:cc:70:19:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:50.299302 containerd[1598]: 2026-01-23 18:43:50.294 [INFO][4034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" Namespace="calico-system" Pod="whisker-d457c8689-kch4w" WorkloadEndpoint="localhost-k8s-whisker--d457c8689--kch4w-eth0" Jan 23 18:43:50.351930 kubelet[2814]: E0123 18:43:50.351851 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:50.430195 containerd[1598]: time="2026-01-23T18:43:50.430083823Z" level=info msg="connecting to shim 0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b" address="unix:///run/containerd/s/7e8d6035882f110138ebeab1bfebca40193b4d1c114b15bd1f48f84a2aebe282" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:50.479601 systemd[1]: Started cri-containerd-0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b.scope - libcontainer container 0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b. 
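[editorial note] The CNI ADD above ends with Calico's IPAM walk: under the host-wide lock the plugin loads the block 192.168.88.128/26 affined to host "localhost", claims 192.168.88.129/26 from it, and writes the resulting WorkloadEndpoint (interface cali1c858c8d549, IPNetworks 192.168.88.129/32) back to the datastore before systemd starts the sandbox scope. A minimal Python sketch, illustrative only and using just the values printed in the log, checks that the claimed address is consistent with that block:

```python
# Illustrative sketch (not part of the log): cross-check the Calico IPAM result above.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # block loaded/confirmed by ipam.go 158/235
pod_ip = ipaddress.ip_address("192.168.88.129")     # address claimed by ipam.go 1262

assert pod_ip in block  # the claimed IP lies inside the host-affined block
print(f"{block} spans {block.network_address}-{block.broadcast_address} "
      f"({block.num_addresses} addresses); pod whisker-d457c8689-kch4w got {pod_ip}")
```

The endpoint spec then records the address as 192.168.88.129/32, matching the "Calico CNI using IPs: [192.168.88.129/32]" line: allocation happens from the /26 block, while the workload itself is addressed as a single /32.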
Jan 23 18:43:50.545000 audit: BPF prog-id=175 op=LOAD Jan 23 18:43:50.550000 audit: BPF prog-id=176 op=LOAD Jan 23 18:43:50.550000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.550000 audit: BPF prog-id=176 op=UNLOAD Jan 23 18:43:50.550000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.554000 audit: BPF prog-id=177 op=LOAD Jan 23 18:43:50.554000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.555000 audit: BPF prog-id=178 op=LOAD Jan 23 18:43:50.555000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.555000 audit: BPF prog-id=178 op=UNLOAD Jan 23 18:43:50.555000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.555000 audit: BPF prog-id=177 op=UNLOAD Jan 23 18:43:50.555000 audit[4110]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.555000 audit: BPF prog-id=179 op=LOAD Jan 23 18:43:50.555000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4099 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030343864616530313939623762626232333364653965343465313532 Jan 23 18:43:50.558245 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:50.677837 containerd[1598]: time="2026-01-23T18:43:50.677754220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d457c8689-kch4w,Uid:934ceaea-a5ec-4119-99e0-f63128ff37ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"0048dae0199b7bbb233de9e44e152f56fa334e21ada166527058f18827a8d79b\"" Jan 23 18:43:50.682189 containerd[1598]: time="2026-01-23T18:43:50.682101074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:43:50.752385 containerd[1598]: time="2026-01-23T18:43:50.752096857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:50.753939 containerd[1598]: time="2026-01-23T18:43:50.753837264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:43:50.754038 containerd[1598]: time="2026-01-23T18:43:50.753967654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:50.755031 kubelet[2814]: E0123 18:43:50.754754 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:43:50.755031 kubelet[2814]: E0123 18:43:50.754837 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:43:50.762987 kubelet[2814]: E0123 18:43:50.762674 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9ffb2c439af2483eba9ce9f173bc31b4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d457c8689-kch4w_calico-system(934ceaea-a5ec-4119-99e0-f63128ff37ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:50.770639 containerd[1598]: time="2026-01-23T18:43:50.769759055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:43:50.842724 containerd[1598]: time="2026-01-23T18:43:50.842015816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:50.846610 containerd[1598]: time="2026-01-23T18:43:50.846546350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:43:50.846940 containerd[1598]: time="2026-01-23T18:43:50.846634959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:50.847004 kubelet[2814]: E0123 18:43:50.846868 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:43:50.847247 kubelet[2814]: E0123 18:43:50.847077 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:43:50.847997 kubelet[2814]: E0123 18:43:50.847418 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d457c8689-kch4w_calico-system(934ceaea-a5ec-4119-99e0-f63128ff37ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:50.849853 kubelet[2814]: E0123 18:43:50.849777 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:43:50.877000 audit: BPF prog-id=180 op=LOAD Jan 23 18:43:50.877000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6ee8a980 a2=98 a3=1fffffffffffffff items=0 ppid=4147 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.877000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:43:50.878000 audit: BPF prog-id=180 op=UNLOAD Jan 23 18:43:50.878000 audit[4264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6ee8a950 a3=0 items=0 ppid=4147 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.878000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:43:50.878000 audit: BPF prog-id=181 op=LOAD Jan 23 18:43:50.878000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6ee8a860 a2=94 a3=3 items=0 ppid=4147 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.878000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:43:50.878000 audit: BPF prog-id=181 op=UNLOAD Jan 23 18:43:50.878000 audit[4264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6ee8a860 a2=94 a3=3 items=0 ppid=4147 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.878000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:43:50.878000 audit: BPF prog-id=182 op=LOAD Jan 23 18:43:50.878000 audit[4264]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6ee8a8a0 a2=94 a3=7ffc6ee8aa80 items=0 ppid=4147 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.878000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:43:50.878000 audit: BPF prog-id=182 op=UNLOAD Jan 23 18:43:50.878000 audit[4264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6ee8a8a0 a2=94 a3=7ffc6ee8aa80 items=0 ppid=4147 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.878000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 18:43:50.881000 audit: BPF prog-id=183 op=LOAD Jan 23 18:43:50.881000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6d31b560 a2=98 a3=3 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:50.881000 audit: BPF prog-id=183 op=UNLOAD Jan 23 18:43:50.881000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6d31b530 a3=0 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:50.881000 audit: BPF prog-id=184 op=LOAD Jan 23 18:43:50.881000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6d31b350 a2=94 a3=54428f items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:50.881000 audit: BPF prog-id=184 op=UNLOAD Jan 23 18:43:50.881000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6d31b350 a2=94 a3=54428f items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:50.881000 audit: BPF prog-id=185 op=LOAD Jan 23 18:43:50.881000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6d31b380 a2=94 a3=2 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:50.881000 audit: BPF prog-id=185 op=UNLOAD Jan 23 18:43:50.881000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6d31b380 a2=0 a3=2 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:50.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.106000 audit: BPF prog-id=186 op=LOAD Jan 23 18:43:51.106000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6d31b240 a2=94 a3=1 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:43:51.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.106000 audit: BPF prog-id=186 op=UNLOAD Jan 23 18:43:51.106000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6d31b240 a2=94 a3=1 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.106000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.119000 audit: BPF prog-id=187 op=LOAD Jan 23 18:43:51.119000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6d31b230 a2=94 a3=4 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.119000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.119000 audit: BPF prog-id=187 op=UNLOAD Jan 23 18:43:51.119000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6d31b230 a2=0 a3=4 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.119000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.120000 audit: BPF prog-id=188 op=LOAD Jan 23 18:43:51.120000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6d31b090 a2=94 a3=5 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.120000 audit: BPF prog-id=188 op=UNLOAD Jan 23 18:43:51.120000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6d31b090 a2=0 a3=5 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.120000 audit: BPF prog-id=189 op=LOAD Jan 23 18:43:51.120000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6d31b2b0 a2=94 a3=6 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.120000 audit: BPF prog-id=189 op=UNLOAD Jan 23 18:43:51.120000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6d31b2b0 a2=0 a3=6 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.120000 audit: BPF prog-id=190 op=LOAD Jan 23 18:43:51.120000 audit[4265]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6d31aa60 a2=94 a3=88 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.120000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.121000 audit: BPF prog-id=191 op=LOAD Jan 23 18:43:51.121000 audit[4265]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff6d31a8e0 a2=94 a3=2 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.121000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.121000 audit: BPF prog-id=191 op=UNLOAD Jan 23 18:43:51.121000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff6d31a910 a2=0 a3=7fff6d31aa10 items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.121000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.122000 audit: BPF prog-id=190 op=UNLOAD Jan 23 18:43:51.122000 audit[4265]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=15434d10 a2=0 a3=33b11ae4496b563a items=0 ppid=4147 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.122000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 18:43:51.135000 audit: BPF prog-id=192 op=LOAD Jan 23 18:43:51.135000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa1c198d0 a2=98 a3=1999999999999999 items=0 ppid=4147 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:43:51.135000 audit: BPF prog-id=192 op=UNLOAD Jan 23 18:43:51.135000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffa1c198a0 a3=0 items=0 ppid=4147 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:43:51.135000 audit: BPF prog-id=193 op=LOAD Jan 23 18:43:51.135000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa1c197b0 a2=94 a3=ffff items=0 ppid=4147 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:43:51.135000 audit: BPF prog-id=193 op=UNLOAD Jan 23 18:43:51.135000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffa1c197b0 a2=94 a3=ffff items=0 ppid=4147 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:43:51.135000 audit: BPF prog-id=194 op=LOAD Jan 23 18:43:51.135000 audit[4268]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa1c197f0 a2=94 a3=7fffa1c199d0 items=0 ppid=4147 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:43:51.135000 audit: BPF prog-id=194 op=UNLOAD Jan 23 18:43:51.135000 audit[4268]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffa1c197f0 a2=94 a3=7fffa1c199d0 items=0 ppid=4147 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 18:43:51.233401 systemd-networkd[1511]: vxlan.calico: Link UP Jan 23 18:43:51.233413 systemd-networkd[1511]: vxlan.calico: Gained carrier Jan 23 18:43:51.265000 audit: BPF prog-id=195 op=LOAD Jan 23 18:43:51.265000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b949bc0 a2=98 a3=0 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.265000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.265000 audit: BPF prog-id=195 op=UNLOAD Jan 23 18:43:51.265000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc4b949b90 a3=0 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.265000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.266000 audit: BPF prog-id=196 op=LOAD Jan 23 18:43:51.266000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b9499d0 a2=94 a3=54428f items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.266000 audit: BPF prog-id=196 op=UNLOAD Jan 23 18:43:51.266000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4b9499d0 a2=94 a3=54428f items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.266000 audit: BPF prog-id=197 op=LOAD Jan 23 18:43:51.266000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4b949a00 a2=94 a3=2 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.266000 audit: BPF prog-id=197 op=UNLOAD Jan 23 18:43:51.266000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4b949a00 a2=0 a3=2 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.266000 audit: BPF prog-id=198 op=LOAD Jan 23 18:43:51.266000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4b9497b0 a2=94 a3=4 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.266000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.268000 audit: BPF prog-id=198 op=UNLOAD Jan 23 18:43:51.268000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc4b9497b0 a2=94 a3=4 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.268000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.268000 audit: BPF prog-id=199 op=LOAD Jan 23 18:43:51.268000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4b9498b0 a2=94 a3=7ffc4b949a30 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.268000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.269000 audit: BPF prog-id=199 op=UNLOAD Jan 23 18:43:51.269000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc4b9498b0 a2=0 a3=7ffc4b949a30 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.269000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.277000 audit: BPF prog-id=200 op=LOAD Jan 23 18:43:51.277000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4b948fe0 a2=94 a3=2 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.277000 audit: BPF prog-id=200 op=UNLOAD Jan 23 18:43:51.277000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc4b948fe0 a2=0 a3=2 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.278000 audit: BPF prog-id=201 op=LOAD Jan 23 18:43:51.278000 audit[4293]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc4b9490e0 a2=94 a3=30 items=0 ppid=4147 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.278000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 18:43:51.293000 audit: BPF prog-id=202 op=LOAD Jan 23 18:43:51.293000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae69bb50 a2=98 a3=0 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.293000 audit: BPF prog-id=202 op=UNLOAD Jan 23 18:43:51.293000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeae69bb20 a3=0 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.293000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.294000 audit: BPF prog-id=203 op=LOAD Jan 23 18:43:51.294000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeae69b940 a2=94 a3=54428f items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.294000 audit: BPF prog-id=203 op=UNLOAD Jan 23 18:43:51.294000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeae69b940 a2=94 a3=54428f items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.294000 audit: BPF prog-id=204 op=LOAD Jan 23 18:43:51.294000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeae69b970 a2=94 a3=2 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.294000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.294000 audit: BPF prog-id=204 op=UNLOAD Jan 23 18:43:51.294000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeae69b970 a2=0 a3=2 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.294000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.356380 kubelet[2814]: E0123 18:43:51.356303 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:51.360391 kubelet[2814]: E0123 18:43:51.359081 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:43:51.410000 audit[4316]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:51.410000 audit[4316]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd9f213d80 a2=0 a3=7ffd9f213d6c items=0 ppid=2926 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:51.415000 audit[4316]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:51.415000 audit[4316]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd9f213d80 a2=0 a3=0 items=0 ppid=2926 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:51.542000 audit: BPF prog-id=205 op=LOAD Jan 23 18:43:51.542000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeae69b830 a2=94 a3=1 items=0 ppid=4147 pid=4302 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.542000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.542000 audit: BPF prog-id=205 op=UNLOAD Jan 23 18:43:51.542000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeae69b830 a2=94 a3=1 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.542000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.551000 audit: BPF prog-id=206 op=LOAD Jan 23 18:43:51.551000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeae69b820 a2=94 a3=4 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.551000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.551000 audit: BPF prog-id=206 op=UNLOAD Jan 23 18:43:51.551000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeae69b820 a2=0 a3=4 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.551000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.551000 audit: BPF prog-id=207 op=LOAD Jan 23 18:43:51.551000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae69b680 a2=94 a3=5 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.551000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.551000 audit: BPF prog-id=207 op=UNLOAD Jan 23 18:43:51.551000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae69b680 a2=0 a3=5 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.551000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.551000 audit: BPF prog-id=208 op=LOAD Jan 23 18:43:51.551000 
audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeae69b8a0 a2=94 a3=6 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.551000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.552000 audit: BPF prog-id=208 op=UNLOAD Jan 23 18:43:51.552000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeae69b8a0 a2=0 a3=6 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.552000 audit: BPF prog-id=209 op=LOAD Jan 23 18:43:51.552000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeae69b050 a2=94 a3=88 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.552000 audit: BPF prog-id=210 op=LOAD Jan 23 18:43:51.552000 audit[4302]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffeae69aed0 a2=94 a3=2 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.552000 audit: BPF prog-id=210 op=UNLOAD Jan 23 18:43:51.552000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffeae69af00 a2=0 a3=7ffeae69b000 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.553000 audit: BPF prog-id=209 op=UNLOAD Jan 23 18:43:51.553000 audit[4302]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=13d2bd10 a2=0 a3=c1e5ae7e7c09309 items=0 ppid=4147 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.553000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 18:43:51.566000 audit: BPF prog-id=201 op=UNLOAD Jan 23 18:43:51.566000 audit[4147]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000f7adc0 a2=0 a3=0 items=0 ppid=4125 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.566000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 18:43:51.644000 audit[4358]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4358 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:51.644000 audit[4358]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd3d759050 a2=0 a3=7ffd3d75903c items=0 ppid=4147 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.644000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:51.646000 audit[4357]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:51.646000 audit[4357]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffca0750e20 a2=0 a3=7ffca0750e0c items=0 ppid=4147 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.646000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:51.648000 audit[4351]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4351 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:51.648000 audit[4351]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd06692df0 a2=0 a3=7ffd06692ddc items=0 ppid=4147 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.648000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:51.659423 systemd-networkd[1511]: cali1c858c8d549: Gained IPv6LL Jan 23 18:43:51.666000 audit[4360]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:51.666000 audit[4360]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe03474180 a2=0 a3=7ffe0347416c items=0 ppid=4147 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:51.666000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:52.360332 kubelet[2814]: E0123 18:43:52.359959 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:43:52.747529 systemd-networkd[1511]: vxlan.calico: Gained IPv6LL Jan 23 18:43:54.126206 containerd[1598]: time="2026-01-23T18:43:54.125964584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2smd,Uid:f72bd6e0-6290-4ad0-99d3-a580eaff8fda,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:54.126206 containerd[1598]: time="2026-01-23T18:43:54.126023793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-276fc,Uid:72e54e47-91e4-415c-876e-aa36180ac3b1,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:54.330404 systemd-networkd[1511]: calib11f937037b: Link UP Jan 23 18:43:54.333657 systemd-networkd[1511]: calib11f937037b: Gained carrier Jan 23 18:43:54.349942 containerd[1598]: 2026-01-23 18:43:54.213 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--276fc-eth0 goldmane-666569f655- calico-system 72e54e47-91e4-415c-876e-aa36180ac3b1 878 0 2026-01-23 18:43:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-276fc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib11f937037b [] [] }} ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-" Jan 23 18:43:54.349942 containerd[1598]: 2026-01-23 18:43:54.213 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.349942 containerd[1598]: 2026-01-23 18:43:54.250 [INFO][4397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" HandleID="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Workload="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.250 [INFO][4397] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" HandleID="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Workload="localhost-k8s-goldmane--666569f655--276fc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-276fc", "timestamp":"2026-01-23 18:43:54.250040948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.250 [INFO][4397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.250 [INFO][4397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.250 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.283 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" host="localhost" Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.293 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.301 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.304 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.307 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:54.350175 containerd[1598]: 2026-01-23 18:43:54.307 [INFO][4397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" host="localhost" Jan 23 18:43:54.350462 containerd[1598]: 2026-01-23 18:43:54.309 [INFO][4397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71 Jan 23 18:43:54.350462 containerd[1598]: 2026-01-23 18:43:54.314 [INFO][4397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" host="localhost" Jan 23 18:43:54.350462 containerd[1598]: 2026-01-23 18:43:54.322 [INFO][4397] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" host="localhost" Jan 23 18:43:54.350462 containerd[1598]: 2026-01-23 18:43:54.322 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" host="localhost" Jan 23 18:43:54.350462 containerd[1598]: 2026-01-23 18:43:54.322 [INFO][4397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:43:54.350462 containerd[1598]: 2026-01-23 18:43:54.322 [INFO][4397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" HandleID="k8s-pod-network.b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Workload="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.350565 containerd[1598]: 2026-01-23 18:43:54.324 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--276fc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"72e54e47-91e4-415c-876e-aa36180ac3b1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-276fc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib11f937037b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:54.350565 containerd[1598]: 2026-01-23 18:43:54.325 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.350646 containerd[1598]: 2026-01-23 18:43:54.325 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib11f937037b ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.350646 containerd[1598]: 2026-01-23 18:43:54.334 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.350688 containerd[1598]: 2026-01-23 18:43:54.334 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--276fc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"72e54e47-91e4-415c-876e-aa36180ac3b1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71", Pod:"goldmane-666569f655-276fc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib11f937037b", MAC:"ca:08:b6:79:b9:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:54.350752 containerd[1598]: 2026-01-23 18:43:54.346 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" Namespace="calico-system" Pod="goldmane-666569f655-276fc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--276fc-eth0" Jan 23 18:43:54.376000 audit[4423]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:54.383425 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 23 18:43:54.383522 kernel: audit: type=1325 audit(1769193834.376:653): table=filter:127 family=2 entries=44 op=nft_register_chain pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:54.398319 containerd[1598]: time="2026-01-23T18:43:54.397342664Z" level=info msg="connecting to shim b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71" address="unix:///run/containerd/s/74f500d27d02cc108a248c920a82c18f96046a0644458be03cbbef8e47540e4e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:54.400403 kernel: audit: type=1300 audit(1769193834.376:653): arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffcc9a9ef20 a2=0 a3=7ffcc9a9ef0c items=0 ppid=4147 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.376000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffcc9a9ef20 a2=0 a3=7ffcc9a9ef0c items=0 ppid=4147 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.376000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 
18:43:54.421344 kernel: audit: type=1327 audit(1769193834.376:653): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:54.454860 systemd[1]: Started cri-containerd-b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71.scope - libcontainer container b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71. Jan 23 18:43:54.464793 systemd-networkd[1511]: cali5393cac5de3: Link UP Jan 23 18:43:54.466395 systemd-networkd[1511]: cali5393cac5de3: Gained carrier Jan 23 18:43:54.474000 audit: BPF prog-id=211 op=LOAD Jan 23 18:43:54.480351 kernel: audit: type=1334 audit(1769193834.474:654): prog-id=211 op=LOAD Jan 23 18:43:54.480976 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:54.488072 kernel: audit: type=1334 audit(1769193834.475:655): prog-id=212 op=LOAD Jan 23 18:43:54.475000 audit: BPF prog-id=212 op=LOAD Jan 23 18:43:54.475000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.494556 containerd[1598]: 2026-01-23 18:43:54.218 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w2smd-eth0 csi-node-driver- calico-system f72bd6e0-6290-4ad0-99d3-a580eaff8fda 772 0 2026-01-23 18:43:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w2smd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5393cac5de3 [] [] }} ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-" Jan 23 18:43:54.494556 containerd[1598]: 2026-01-23 18:43:54.218 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.494556 containerd[1598]: 2026-01-23 18:43:54.297 [INFO][4399] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" HandleID="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Workload="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.297 [INFO][4399] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" HandleID="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Workload="localhost-k8s-csi--node--driver--w2smd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w2smd", 
"timestamp":"2026-01-23 18:43:54.297055223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.297 [INFO][4399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.322 [INFO][4399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.322 [INFO][4399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.372 [INFO][4399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" host="localhost" Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.422 [INFO][4399] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.430 [INFO][4399] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.434 [INFO][4399] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.437 [INFO][4399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:54.494813 containerd[1598]: 2026-01-23 18:43:54.437 [INFO][4399] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" host="localhost" Jan 23 18:43:54.495070 containerd[1598]: 2026-01-23 18:43:54.440 [INFO][4399] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e Jan 23 18:43:54.495070 containerd[1598]: 2026-01-23 18:43:54.446 [INFO][4399] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" host="localhost" Jan 23 18:43:54.495070 containerd[1598]: 2026-01-23 18:43:54.456 [INFO][4399] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" host="localhost" Jan 23 18:43:54.495070 containerd[1598]: 2026-01-23 18:43:54.456 [INFO][4399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" host="localhost" Jan 23 18:43:54.495070 containerd[1598]: 2026-01-23 18:43:54.456 [INFO][4399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:43:54.495070 containerd[1598]: 2026-01-23 18:43:54.456 [INFO][4399] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" HandleID="k8s-pod-network.7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Workload="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.495215 containerd[1598]: 2026-01-23 18:43:54.460 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w2smd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f72bd6e0-6290-4ad0-99d3-a580eaff8fda", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w2smd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5393cac5de3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:54.495383 containerd[1598]: 2026-01-23 18:43:54.460 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.495383 containerd[1598]: 2026-01-23 18:43:54.461 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5393cac5de3 ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.495383 containerd[1598]: 2026-01-23 18:43:54.466 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.495455 containerd[1598]: 2026-01-23 18:43:54.467 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w2smd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f72bd6e0-6290-4ad0-99d3-a580eaff8fda", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e", Pod:"csi-node-driver-w2smd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5393cac5de3", MAC:"0a:42:9a:7b:66:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:54.495523 containerd[1598]: 2026-01-23 18:43:54.486 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" Namespace="calico-system" Pod="csi-node-driver-w2smd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w2smd-eth0" Jan 23 18:43:54.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.508841 kernel: audit: type=1300 audit(1769193834.475:655): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.508888 kernel: audit: type=1327 audit(1769193834.475:655): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.509091 kernel: audit: type=1334 audit(1769193834.475:656): prog-id=212 op=UNLOAD Jan 23 18:43:54.475000 audit: BPF prog-id=212 op=UNLOAD Jan 23 18:43:54.475000 audit[4444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.521701 kernel: audit: type=1300 audit(1769193834.475:656): arch=c000003e syscall=3 success=yes exit=0 a0=14 
a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.533304 kernel: audit: type=1327 audit(1769193834.475:656): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.475000 audit: BPF prog-id=213 op=LOAD Jan 23 18:43:54.475000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.475000 audit: BPF prog-id=214 op=LOAD Jan 23 18:43:54.475000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.476000 audit: BPF prog-id=214 op=UNLOAD Jan 23 18:43:54.476000 audit[4444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.476000 audit: BPF prog-id=213 op=UNLOAD Jan 23 18:43:54.476000 audit[4444]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.476000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.476000 audit: BPF prog-id=215 op=LOAD Jan 23 18:43:54.476000 audit[4444]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4432 pid=4444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613763626162643364336639616231626234356565303739656661 Jan 23 18:43:54.506000 audit[4469]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4469 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:54.506000 audit[4469]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffd3e104250 a2=0 a3=7ffd3e10423c items=0 ppid=4147 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.506000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:54.542048 containerd[1598]: time="2026-01-23T18:43:54.541927090Z" level=info msg="connecting to shim 7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e" address="unix:///run/containerd/s/2635bf38185f21559da3aa176375de05ee796d3a8af6b079bc0f59300a09d5d1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:54.554755 containerd[1598]: time="2026-01-23T18:43:54.554696126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-276fc,Uid:72e54e47-91e4-415c-876e-aa36180ac3b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1a7cbabd3d3f9ab1bb45ee079efa28fb17a56c1d29b12e258fe0410ce247d71\"" Jan 23 18:43:54.556849 containerd[1598]: time="2026-01-23T18:43:54.556787003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:43:54.598568 systemd[1]: Started cri-containerd-7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e.scope - libcontainer container 7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e. 
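The audit PROCTITLE records in this stream carry the executed command line as a hex blob with NUL-separated arguments. Decoding them shows what actually ran: the 6970...3030 value attached to the NETFILTER_CFG events is iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000, and the 7275... values belong to runc --root /run/containerd/runc/k8s.io --log ... invocations for the new sandboxes. A small sketch of the decoding; decode_proctitle is an illustrative helper, not a tool referenced by the log.

def decode_proctitle(hex_value: str) -> str:
    """Turn an audit PROCTITLE hex blob into a space-separated command line."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode(errors="replace")

# Hex value from the NETFILTER_CFG records above:
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000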
Jan 23 18:43:54.614000 audit: BPF prog-id=216 op=LOAD Jan 23 18:43:54.614000 audit: BPF prog-id=217 op=LOAD Jan 23 18:43:54.614000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.614000 audit: BPF prog-id=217 op=UNLOAD Jan 23 18:43:54.614000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.615000 audit: BPF prog-id=218 op=LOAD Jan 23 18:43:54.615000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.615000 audit: BPF prog-id=219 op=LOAD Jan 23 18:43:54.615000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.615000 audit: BPF prog-id=219 op=UNLOAD Jan 23 18:43:54.615000 audit[4495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.615000 audit: BPF prog-id=218 op=UNLOAD Jan 23 18:43:54.615000 audit[4495]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.615000 audit: BPF prog-id=220 op=LOAD Jan 23 18:43:54.615000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4479 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:54.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736393265633838663262666665386632303534343862333835366137 Jan 23 18:43:54.617409 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:54.638577 containerd[1598]: time="2026-01-23T18:43:54.638427119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2smd,Uid:f72bd6e0-6290-4ad0-99d3-a580eaff8fda,Namespace:calico-system,Attempt:0,} returns sandbox id \"7692ec88f2bffe8f205448b3856a716f2d636132b3f97545bc26accd7aa7d46e\"" Jan 23 18:43:54.657569 containerd[1598]: time="2026-01-23T18:43:54.657484174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:54.659053 containerd[1598]: time="2026-01-23T18:43:54.659018427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:43:54.659167 containerd[1598]: time="2026-01-23T18:43:54.659090741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:54.659330 kubelet[2814]: E0123 18:43:54.659240 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:43:54.659782 kubelet[2814]: E0123 18:43:54.659708 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:43:54.660047 kubelet[2814]: E0123 18:43:54.659961 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp2lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-276fc_calico-system(72e54e47-91e4-415c-876e-aa36180ac3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:54.661360 containerd[1598]: time="2026-01-23T18:43:54.660225258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:43:54.661882 kubelet[2814]: E0123 18:43:54.661839 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 
18:43:54.741930 containerd[1598]: time="2026-01-23T18:43:54.741807041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:54.743359 containerd[1598]: time="2026-01-23T18:43:54.743232824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:43:54.743472 containerd[1598]: time="2026-01-23T18:43:54.743368575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:54.743673 kubelet[2814]: E0123 18:43:54.743590 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:43:54.743768 kubelet[2814]: E0123 18:43:54.743670 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:43:54.743913 kubelet[2814]: E0123 18:43:54.743850 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:54.746385 containerd[1598]: time="2026-01-23T18:43:54.746351833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:43:54.806077 containerd[1598]: time="2026-01-23T18:43:54.805943895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:54.807770 containerd[1598]: time="2026-01-23T18:43:54.807687262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:43:54.807770 containerd[1598]: time="2026-01-23T18:43:54.807728202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:54.808029 kubelet[2814]: E0123 18:43:54.807956 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:43:54.808157 kubelet[2814]: E0123 18:43:54.808029 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:43:54.808406 kubelet[2814]: E0123 18:43:54.808211 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:54.809705 kubelet[2814]: E0123 18:43:54.809637 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:55.124669 kubelet[2814]: E0123 18:43:55.124607 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:55.125449 containerd[1598]: time="2026-01-23T18:43:55.125347838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9vpfw,Uid:33b60d66-70dc-47d9-aa85-505e7fd31a2d,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:55.265896 systemd-networkd[1511]: cali579c0a94f6a: Link UP Jan 23 18:43:55.266814 systemd-networkd[1511]: cali579c0a94f6a: Gained carrier Jan 23 18:43:55.286954 containerd[1598]: 2026-01-23 18:43:55.183 [INFO][4521] cni-plugin/plugin.go 340: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0 coredns-668d6bf9bc- kube-system 33b60d66-70dc-47d9-aa85-505e7fd31a2d 885 0 2026-01-23 18:43:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9vpfw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali579c0a94f6a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-" Jan 23 18:43:55.286954 containerd[1598]: 2026-01-23 18:43:55.183 [INFO][4521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.286954 containerd[1598]: 2026-01-23 18:43:55.216 [INFO][4535] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" HandleID="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Workload="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.217 [INFO][4535] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" HandleID="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Workload="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042a520), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9vpfw", "timestamp":"2026-01-23 18:43:55.216985768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.217 [INFO][4535] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.217 [INFO][4535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.217 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.224 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" host="localhost" Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.231 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.239 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.241 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.243 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:55.288715 containerd[1598]: 2026-01-23 18:43:55.243 [INFO][4535] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" host="localhost" Jan 23 18:43:55.289228 containerd[1598]: 2026-01-23 18:43:55.245 [INFO][4535] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5 Jan 23 18:43:55.289228 containerd[1598]: 2026-01-23 18:43:55.250 [INFO][4535] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" host="localhost" Jan 23 18:43:55.289228 containerd[1598]: 2026-01-23 18:43:55.258 [INFO][4535] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" host="localhost" Jan 23 18:43:55.289228 containerd[1598]: 2026-01-23 18:43:55.258 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" host="localhost" Jan 23 18:43:55.289228 containerd[1598]: 2026-01-23 18:43:55.259 [INFO][4535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
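Every image pull in this section fails the same way: containerd reports a 404 from ghcr.io and kubelet records ErrImagePull / ImagePullBackOff for whisker, whisker-backend, goldmane, csi and node-driver-registrar, all under ghcr.io/flatcar/calico/*:v3.30.4. To see at a glance whether a single tag is missing or the whole repository path is wrong, the failing references can be tallied from a journalctl text export. A hedged sketch: failed_pulls and journal.txt are illustrative names, and the counts are per log occurrence (kubelet and containerd both echo each failure).

import re
from collections import Counter

# The resolve failure string appears in both containerd and kubelet records, e.g.
#   failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found
RESOLVE_RE = re.compile(r"failed to resolve image: (?P<image>\S+): not found")

def failed_pulls(journal_path: str) -> Counter:
    """Count 'not found' image references in a journalctl text export."""
    counts: Counter = Counter()
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for image in RESOLVE_RE.findall(line):
                counts[image] += 1
    return counts

if __name__ == "__main__":
    # journal.txt is a hypothetical path to the exported journal.
    for image, hits in failed_pulls("journal.txt").most_common():
        print(f"{hits:4d}  {image}")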
Jan 23 18:43:55.289228 containerd[1598]: 2026-01-23 18:43:55.259 [INFO][4535] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" HandleID="k8s-pod-network.500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Workload="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.289460 containerd[1598]: 2026-01-23 18:43:55.262 [INFO][4521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"33b60d66-70dc-47d9-aa85-505e7fd31a2d", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9vpfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali579c0a94f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:55.289642 containerd[1598]: 2026-01-23 18:43:55.262 [INFO][4521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.289642 containerd[1598]: 2026-01-23 18:43:55.262 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali579c0a94f6a ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.289642 containerd[1598]: 2026-01-23 18:43:55.267 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.289745 
containerd[1598]: 2026-01-23 18:43:55.269 [INFO][4521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"33b60d66-70dc-47d9-aa85-505e7fd31a2d", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5", Pod:"coredns-668d6bf9bc-9vpfw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali579c0a94f6a", MAC:"da:7b:79:43:4d:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:55.289745 containerd[1598]: 2026-01-23 18:43:55.282 [INFO][4521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" Namespace="kube-system" Pod="coredns-668d6bf9bc-9vpfw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9vpfw-eth0" Jan 23 18:43:55.308000 audit[4554]: NETFILTER_CFG table=filter:129 family=2 entries=56 op=nft_register_chain pid=4554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:55.308000 audit[4554]: SYSCALL arch=c000003e syscall=46 success=yes exit=27780 a0=3 a1=7ffe4c103e70 a2=0 a3=7ffe4c103e5c items=0 ppid=4147 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.308000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:55.316013 containerd[1598]: time="2026-01-23T18:43:55.315969196Z" level=info msg="connecting to shim 500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5" address="unix:///run/containerd/s/88e82ffaea3aeb4449b7c5c64325f2c4463a777684bffff335e155d38cb2f422" namespace=k8s.io 
protocol=ttrpc version=3 Jan 23 18:43:55.349567 systemd[1]: Started cri-containerd-500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5.scope - libcontainer container 500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5. Jan 23 18:43:55.366000 audit: BPF prog-id=221 op=LOAD Jan 23 18:43:55.367000 audit: BPF prog-id=222 op=LOAD Jan 23 18:43:55.367000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.367000 audit: BPF prog-id=222 op=UNLOAD Jan 23 18:43:55.367000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.368000 audit: BPF prog-id=223 op=LOAD Jan 23 18:43:55.368000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.368000 audit: BPF prog-id=224 op=LOAD Jan 23 18:43:55.368000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.368000 audit: BPF prog-id=224 op=UNLOAD Jan 23 18:43:55.368000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.368000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.368000 audit: BPF prog-id=223 op=UNLOAD Jan 23 18:43:55.368000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.368000 audit: BPF prog-id=225 op=LOAD Jan 23 18:43:55.368000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4563 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530306538333130636230643732303261396330386163373533353039 Jan 23 18:43:55.372253 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:55.372576 kubelet[2814]: E0123 18:43:55.372536 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:43:55.374062 kubelet[2814]: E0123 18:43:55.373886 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:55.403000 audit[4597]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:55.403000 audit[4597]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 
a0=3 a1=7ffe0b432490 a2=0 a3=7ffe0b43247c items=0 ppid=2926 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:55.410000 audit[4597]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:55.410000 audit[4597]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe0b432490 a2=0 a3=0 items=0 ppid=2926 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:55.423584 containerd[1598]: time="2026-01-23T18:43:55.423499253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9vpfw,Uid:33b60d66-70dc-47d9-aa85-505e7fd31a2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5\"" Jan 23 18:43:55.424855 kubelet[2814]: E0123 18:43:55.424668 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:55.427634 containerd[1598]: time="2026-01-23T18:43:55.427564705Z" level=info msg="CreateContainer within sandbox \"500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:43:55.445525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681569187.mount: Deactivated successfully. Jan 23 18:43:55.445924 containerd[1598]: time="2026-01-23T18:43:55.445864540Z" level=info msg="Container 7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:55.452543 containerd[1598]: time="2026-01-23T18:43:55.452453678Z" level=info msg="CreateContainer within sandbox \"500e8310cb0d7202a9c08ac753509f7b841de3097a886216c2d309dbd4211ee5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd\"" Jan 23 18:43:55.453055 containerd[1598]: time="2026-01-23T18:43:55.452979248Z" level=info msg="StartContainer for \"7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd\"" Jan 23 18:43:55.454138 containerd[1598]: time="2026-01-23T18:43:55.454058705Z" level=info msg="connecting to shim 7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd" address="unix:///run/containerd/s/88e82ffaea3aeb4449b7c5c64325f2c4463a777684bffff335e155d38cb2f422" protocol=ttrpc version=3 Jan 23 18:43:55.486472 systemd[1]: Started cri-containerd-7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd.scope - libcontainer container 7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd. 
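The audit PROCTITLE records above carry the full command line of the audited process as one hex string, with NUL bytes separating the arguments (this is how /proc/<pid>/cmdline stores them). A minimal decoder, written here only as an illustration and not part of auditd or of any tooling on this host, recovers the iptables-restore invocation from the 18:43:55.403 record above; the longer runc proctitle values nearby decode the same way.

def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE hex field into the argv it encodes."""
    raw = bytes.fromhex(hex_value)
    # Arguments in /proc/<pid>/cmdline are separated by NUL bytes.
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# proctitle value copied from the iptables-restore audit record above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']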
Jan 23 18:43:55.505000 audit: BPF prog-id=226 op=LOAD Jan 23 18:43:55.506000 audit: BPF prog-id=227 op=LOAD Jan 23 18:43:55.506000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.506000 audit: BPF prog-id=227 op=UNLOAD Jan 23 18:43:55.506000 audit[4605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.506000 audit: BPF prog-id=228 op=LOAD Jan 23 18:43:55.506000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.506000 audit: BPF prog-id=229 op=LOAD Jan 23 18:43:55.506000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.506000 audit: BPF prog-id=229 op=UNLOAD Jan 23 18:43:55.506000 audit[4605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.506000 audit: BPF prog-id=228 op=UNLOAD Jan 23 18:43:55.506000 audit[4605]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.507000 audit: BPF prog-id=230 op=LOAD Jan 23 18:43:55.507000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4563 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:55.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761353962303663376131623037376365396266656561343862316134 Jan 23 18:43:55.531655 containerd[1598]: time="2026-01-23T18:43:55.531529079Z" level=info msg="StartContainer for \"7a59b06c7a1b077ce9bfeea48b1a4ac572e07289ccffe0073efece8e063bc4bd\" returns successfully" Jan 23 18:43:55.563684 systemd-networkd[1511]: calib11f937037b: Gained IPv6LL Jan 23 18:43:55.947568 systemd-networkd[1511]: cali5393cac5de3: Gained IPv6LL Jan 23 18:43:56.130623 containerd[1598]: time="2026-01-23T18:43:56.130491943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bdcd99c5b-6vx2x,Uid:c6f4bf65-2b8c-4712-a434-da7d69d938c0,Namespace:calico-system,Attempt:0,}" Jan 23 18:43:56.305748 systemd-networkd[1511]: cali0155be4db00: Link UP Jan 23 18:43:56.306607 systemd-networkd[1511]: cali0155be4db00: Gained carrier Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.205 [INFO][4638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0 calico-kube-controllers-5bdcd99c5b- calico-system c6f4bf65-2b8c-4712-a434-da7d69d938c0 874 0 2026-01-23 18:43:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bdcd99c5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5bdcd99c5b-6vx2x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0155be4db00 [] [] }} ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.205 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.249 [INFO][4652] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" HandleID="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Workload="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.250 [INFO][4652] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" HandleID="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Workload="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5700), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5bdcd99c5b-6vx2x", "timestamp":"2026-01-23 18:43:56.249712421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.250 [INFO][4652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.250 [INFO][4652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.250 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.258 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.266 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.273 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.276 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.280 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.280 [INFO][4652] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.283 [INFO][4652] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.289 [INFO][4652] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.298 [INFO][4652] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.298 [INFO][4652] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" host="localhost" Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.298 [INFO][4652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:43:56.329712 containerd[1598]: 2026-01-23 18:43:56.298 [INFO][4652] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" HandleID="k8s-pod-network.ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Workload="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.331755 containerd[1598]: 2026-01-23 18:43:56.302 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0", GenerateName:"calico-kube-controllers-5bdcd99c5b-", Namespace:"calico-system", SelfLink:"", UID:"c6f4bf65-2b8c-4712-a434-da7d69d938c0", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bdcd99c5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5bdcd99c5b-6vx2x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0155be4db00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:56.331755 containerd[1598]: 2026-01-23 18:43:56.302 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.331755 containerd[1598]: 2026-01-23 18:43:56.302 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0155be4db00 ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.331755 containerd[1598]: 2026-01-23 18:43:56.307 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.331755 containerd[1598]: 2026-01-23 18:43:56.307 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0", GenerateName:"calico-kube-controllers-5bdcd99c5b-", Namespace:"calico-system", SelfLink:"", UID:"c6f4bf65-2b8c-4712-a434-da7d69d938c0", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bdcd99c5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca", Pod:"calico-kube-controllers-5bdcd99c5b-6vx2x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0155be4db00", MAC:"8a:6b:c9:92:58:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:56.331755 containerd[1598]: 2026-01-23 18:43:56.325 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" Namespace="calico-system" Pod="calico-kube-controllers-5bdcd99c5b-6vx2x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5bdcd99c5b--6vx2x-eth0" Jan 23 18:43:56.418000 audit[4669]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=4669 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:56.418000 audit[4669]: SYSCALL arch=c000003e syscall=46 success=yes exit=21936 a0=3 a1=7ffc069e82a0 a2=0 a3=7ffc069e828c items=0 ppid=4147 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.418000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:56.443332 kubelet[2814]: E0123 18:43:56.442160 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:56.445685 kubelet[2814]: E0123 18:43:56.444885 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:43:56.445685 kubelet[2814]: E0123 18:43:56.445237 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:43:56.518197 containerd[1598]: time="2026-01-23T18:43:56.518081434Z" level=info msg="connecting to shim ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca" address="unix:///run/containerd/s/99298cda2ee03016b2d5e1041102fd9667029513db4ad31f2d3bc79f513f0cb9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:56.526472 systemd-networkd[1511]: cali579c0a94f6a: Gained IPv6LL Jan 23 18:43:56.558590 kubelet[2814]: I0123 18:43:56.556855 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9vpfw" podStartSLOduration=37.556827756 podStartE2EDuration="37.556827756s" podCreationTimestamp="2026-01-23 18:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:43:56.533438352 +0000 UTC m=+40.671007515" watchObservedRunningTime="2026-01-23 18:43:56.556827756 +0000 UTC m=+40.694396939" Jan 23 18:43:56.598000 audit[4693]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:56.598000 audit[4693]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4b8f5db0 a2=0 a3=7fff4b8f5d9c items=0 ppid=2926 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:56.606000 audit[4693]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:56.606000 audit[4693]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff4b8f5db0 a2=0 a3=0 items=0 ppid=2926 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.606000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:56.640814 systemd[1]: Started cri-containerd-ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca.scope - libcontainer container ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca. Jan 23 18:43:56.652000 audit[4708]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:56.652000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7a368310 a2=0 a3=7fff7a3682fc items=0 ppid=2926 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:56.662000 audit: BPF prog-id=231 op=LOAD Jan 23 18:43:56.663000 audit: BPF prog-id=232 op=LOAD Jan 23 18:43:56.663000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.664000 audit: BPF prog-id=232 op=UNLOAD Jan 23 18:43:56.664000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.664000 audit: BPF prog-id=233 op=LOAD Jan 23 18:43:56.664000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.664000 audit: BPF prog-id=234 op=LOAD Jan 23 18:43:56.664000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.664000 audit: BPF prog-id=234 op=UNLOAD Jan 23 18:43:56.664000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.664000 audit: BPF prog-id=233 op=UNLOAD Jan 23 18:43:56.664000 audit[4691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.664000 audit: BPF prog-id=235 op=LOAD Jan 23 18:43:56.664000 audit[4691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4680 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465643363353334646162646232663364343535633437376562393365 Jan 23 18:43:56.667742 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:56.673000 audit[4708]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:56.673000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff7a368310 a2=0 a3=7fff7a3682fc items=0 ppid=2926 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:56.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:56.738683 containerd[1598]: time="2026-01-23T18:43:56.738550327Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5bdcd99c5b-6vx2x,Uid:c6f4bf65-2b8c-4712-a434-da7d69d938c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"ded3c534dabdb2f3d455c477eb93ec9010aa967754d10f6ac511dfc2668881ca\"" Jan 23 18:43:56.741953 containerd[1598]: time="2026-01-23T18:43:56.741703109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:43:56.823373 containerd[1598]: time="2026-01-23T18:43:56.823148608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:56.825475 containerd[1598]: time="2026-01-23T18:43:56.825201309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:43:56.825475 containerd[1598]: time="2026-01-23T18:43:56.825339900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:56.825708 kubelet[2814]: E0123 18:43:56.825627 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:43:56.825708 kubelet[2814]: E0123 18:43:56.825688 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:43:56.825957 kubelet[2814]: E0123 18:43:56.825846 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2dlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bdcd99c5b-6vx2x_calico-system(c6f4bf65-2b8c-4712-a434-da7d69d938c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:56.827396 kubelet[2814]: E0123 18:43:56.827338 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:43:57.127234 kubelet[2814]: E0123 18:43:57.126347 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:57.127928 containerd[1598]: time="2026-01-23T18:43:57.127885887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5dcz,Uid:ac222387-3b7e-4f68-972a-ec412c252e8d,Namespace:kube-system,Attempt:0,}" Jan 23 18:43:57.128505 containerd[1598]: time="2026-01-23T18:43:57.128445156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-jls4r,Uid:2647b35f-a248-488d-8f41-2052dd32f727,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:43:57.128581 containerd[1598]: time="2026-01-23T18:43:57.128560809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-t9bs5,Uid:50725488-4a1d-4f65-a7da-a4a923730733,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:43:57.365531 systemd-networkd[1511]: cali1d0787a3d24: Link UP Jan 23 18:43:57.385311 systemd-networkd[1511]: cali1d0787a3d24: Gained carrier Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.230 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0 coredns-668d6bf9bc- kube-system ac222387-3b7e-4f68-972a-ec412c252e8d 884 0 2026-01-23 18:43:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-p5dcz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d0787a3d24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.232 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.287 [INFO][4778] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" HandleID="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Workload="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.287 [INFO][4778] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" HandleID="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Workload="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-p5dcz", "timestamp":"2026-01-23 18:43:57.287681966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.288 [INFO][4778] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.288 [INFO][4778] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
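The repeated kubelet "Error syncing pod" records and the containerd PullImage failures in this section all point at image tags the registry answers with 404 Not Found. A throwaway helper like the one below, fed abbreviated stand-ins for the journal entries rather than verbatim copies, lists the distinct image references stuck in ImagePullBackOff:

import re

# Abbreviated stand-ins for the journal entries above (not verbatim).
log_lines = [
    'Error syncing pod ... Back-off pulling image "ghcr.io/flatcar/calico/goldmane:v3.30.4"',
    'Error syncing pod ... "ghcr.io/flatcar/calico/csi:v3.30.4" ... '
    '"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"',
    'PullImage "ghcr.io/flatcar/calico/kube-controllers:v3.30.4" failed: not found',
]

image_ref = re.compile(r"ghcr\.io/[\w./-]+:[\w.-]+")
failing = sorted({m.group(0) for line in log_lines for m in image_ref.finditer(line)})
for ref in failing:
    print(ref)
# ghcr.io/flatcar/calico/csi:v3.30.4
# ghcr.io/flatcar/calico/goldmane:v3.30.4
# ghcr.io/flatcar/calico/kube-controllers:v3.30.4
# ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4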
Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.288 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.296 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.306 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.315 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.320 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.326 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.326 [INFO][4778] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.330 [INFO][4778] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.338 [INFO][4778] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.349 [INFO][4778] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.349 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" host="localhost" Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.349 [INFO][4778] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
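Every IPAM run in this section follows the same pattern: look up the node's block affinity, load the 192.168.88.128/26 block, and claim the next free address from it. As a quick standard-library check (illustrative only), the addresses handed out so far all come from that single /26, which has room for 64 addresses:

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ["192.168.88.132", "192.168.88.133", "192.168.88.134"]  # from the records above

print(block.num_addresses)                                       # 64
print(all(ipaddress.ip_address(ip) in block for ip in claimed))  # True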
Jan 23 18:43:57.408405 containerd[1598]: 2026-01-23 18:43:57.349 [INFO][4778] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" HandleID="k8s-pod-network.7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Workload="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.410162 containerd[1598]: 2026-01-23 18:43:57.355 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac222387-3b7e-4f68-972a-ec412c252e8d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-p5dcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d0787a3d24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:57.410162 containerd[1598]: 2026-01-23 18:43:57.355 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.410162 containerd[1598]: 2026-01-23 18:43:57.355 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d0787a3d24 ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.410162 containerd[1598]: 2026-01-23 18:43:57.384 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.410162 
containerd[1598]: 2026-01-23 18:43:57.385 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac222387-3b7e-4f68-972a-ec412c252e8d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b", Pod:"coredns-668d6bf9bc-p5dcz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d0787a3d24", MAC:"52:c5:2c:e3:38:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:57.410162 containerd[1598]: 2026-01-23 18:43:57.404 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5dcz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p5dcz-eth0" Jan 23 18:43:57.450853 kubelet[2814]: E0123 18:43:57.450681 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:57.458683 kubelet[2814]: E0123 18:43:57.458619 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:43:57.472000 audit[4811]: NETFILTER_CFG table=filter:137 family=2 entries=50 op=nft_register_chain pid=4811 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" 
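The WorkloadEndpoint dumps above print port numbers as hex (Port:0x35, Port:0x23c1), whereas the plugin's port summary earlier shows them in decimal. Converting them back, purely as a sanity check, yields the usual CoreDNS ports: 53 for DNS over UDP and TCP, and 9153 for the Prometheus metrics endpoint.

# Port values copied from the endpoint dump above, converted from hex.
for name, hex_port in [("dns", "0x35"), ("dns-tcp", "0x35"), ("metrics", "0x23c1")]:
    print(name, int(hex_port, 16))
# dns 53
# dns-tcp 53
# metrics 9153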
Jan 23 18:43:57.472000 audit[4811]: SYSCALL arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffc292c91f0 a2=0 a3=7ffc292c91dc items=0 ppid=4147 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.472000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:57.510646 containerd[1598]: time="2026-01-23T18:43:57.510472740Z" level=info msg="connecting to shim 7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b" address="unix:///run/containerd/s/cdc183e4251e47de01077c60781275199db0192be676e186a024d9fcfc9f254b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:57.525226 systemd-networkd[1511]: cali837df234ff8: Link UP Jan 23 18:43:57.528063 systemd-networkd[1511]: cali837df234ff8: Gained carrier Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.239 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0 calico-apiserver-56878495cb- calico-apiserver 2647b35f-a248-488d-8f41-2052dd32f727 887 0 2026-01-23 18:43:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56878495cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56878495cb-jls4r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali837df234ff8 [] [] }} ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.239 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.289 [INFO][4780] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" HandleID="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Workload="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.289 [INFO][4780] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" HandleID="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Workload="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56878495cb-jls4r", "timestamp":"2026-01-23 18:43:57.289815053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.289 [INFO][4780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.350 [INFO][4780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.350 [INFO][4780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.396 [INFO][4780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.407 [INFO][4780] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.420 [INFO][4780] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.445 [INFO][4780] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.450 [INFO][4780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.450 [INFO][4780] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.459 [INFO][4780] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206 Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.484 [INFO][4780] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.505 [INFO][4780] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.505 [INFO][4780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" host="localhost" Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.506 [INFO][4780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
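The ipam.go entries above trace Calico's block-affinity flow: take the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, load the block, and claim the next free address from it (.134 went to the coredns pod, .135 is claimed here). A heavily simplified Go sketch of the "next free address in an affine /26" step follows; it is illustrative only and does not reflect Calico's actual data structures or reservation rules.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFreeIP returns the first address inside the block that is not already
    // allocated. Calico additionally tracks handles and reservations in its
    // datastore; this only shows the "scan the affine block for a free slot" idea.
    func nextFreeIP(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // the affine block from the log
        // Illustrative allocation state: pretend .128-.134 are taken, as .134 was
        // the last address handed out before this request.
        allocated := map[netip.Addr]bool{}
        for a, i := netip.MustParseAddr("192.168.88.128"), 0; i < 7; a, i = a.Next(), i+1 {
            allocated[a] = true
        }
        if ip, ok := nextFreeIP(block, allocated); ok {
            fmt.Println("next assignment from", block, "would be", ip) // prints 192.168.88.135
        }
    }

Run against that illustrative state it prints 192.168.88.135, the address claimed in the entries that follow.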
Jan 23 18:43:57.561871 containerd[1598]: 2026-01-23 18:43:57.506 [INFO][4780] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" HandleID="k8s-pod-network.17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Workload="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.564060 containerd[1598]: 2026-01-23 18:43:57.518 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0", GenerateName:"calico-apiserver-56878495cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2647b35f-a248-488d-8f41-2052dd32f727", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56878495cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56878495cb-jls4r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali837df234ff8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:57.564060 containerd[1598]: 2026-01-23 18:43:57.518 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.564060 containerd[1598]: 2026-01-23 18:43:57.518 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali837df234ff8 ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.564060 containerd[1598]: 2026-01-23 18:43:57.529 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.564060 containerd[1598]: 2026-01-23 18:43:57.530 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0", GenerateName:"calico-apiserver-56878495cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2647b35f-a248-488d-8f41-2052dd32f727", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56878495cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206", Pod:"calico-apiserver-56878495cb-jls4r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali837df234ff8", MAC:"1a:6b:45:e9:67:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:57.564060 containerd[1598]: 2026-01-23 18:43:57.557 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-jls4r" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--jls4r-eth0" Jan 23 18:43:57.625000 audit[4853]: NETFILTER_CFG table=filter:138 family=2 entries=62 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:57.625000 audit[4853]: SYSCALL arch=c000003e syscall=46 success=yes exit=31740 a0=3 a1=7fff23604cc0 a2=0 a3=7fff23604cac items=0 ppid=4147 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.625000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:57.629223 systemd[1]: Started cri-containerd-7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b.scope - libcontainer container 7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b. 
Jan 23 18:43:57.645625 containerd[1598]: time="2026-01-23T18:43:57.644433604Z" level=info msg="connecting to shim 17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206" address="unix:///run/containerd/s/47b07db644ab3337dc43efadf29cadfe373128c4d2fed7d0e37e6fe21fc58be2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:57.659815 systemd-networkd[1511]: cali7b309d5bc3e: Link UP Jan 23 18:43:57.661672 systemd-networkd[1511]: cali7b309d5bc3e: Gained carrier Jan 23 18:43:57.664000 audit: BPF prog-id=236 op=LOAD Jan 23 18:43:57.665000 audit: BPF prog-id=237 op=LOAD Jan 23 18:43:57.665000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.665000 audit: BPF prog-id=237 op=UNLOAD Jan 23 18:43:57.665000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.665000 audit: BPF prog-id=238 op=LOAD Jan 23 18:43:57.665000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.665000 audit: BPF prog-id=239 op=LOAD Jan 23 18:43:57.665000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.666000 audit: BPF prog-id=239 op=UNLOAD Jan 23 18:43:57.666000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.666000 audit: BPF prog-id=238 op=UNLOAD Jan 23 18:43:57.666000 audit[4833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.667000 audit: BPF prog-id=240 op=LOAD Jan 23 18:43:57.667000 audit[4833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4821 pid=4833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762393361616664653861323062663232306661306333373233343136 Jan 23 18:43:57.681483 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:57.696016 systemd[1]: Started cri-containerd-17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206.scope - libcontainer container 17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206. 
Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.258 [INFO][4753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0 calico-apiserver-56878495cb- calico-apiserver 50725488-4a1d-4f65-a7da-a4a923730733 889 0 2026-01-23 18:43:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56878495cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56878495cb-t9bs5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b309d5bc3e [] [] }} ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.258 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.313 [INFO][4790] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" HandleID="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Workload="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.313 [INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" HandleID="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Workload="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56878495cb-t9bs5", "timestamp":"2026-01-23 18:43:57.313496818 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.313 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.506 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.507 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.528 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.543 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.563 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.572 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.597 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.597 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.622 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.632 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.650 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.650 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" host="localhost" Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.650 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
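Note how the two IPAM walks interleave: the request for calico-apiserver-56878495cb-t9bs5 logged "About to acquire host-wide IPAM lock" at 18:43:57.313 but only acquired it at 18:43:57.506, the moment the previous assignment released it, so address assignment on a host is strictly serialized. A toy Go illustration of that pattern (a plain sync.Mutex, not Calico's implementation; pod names and octets are borrowed from the log):

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        // hostIPAMLock serializes address assignment on one host, the way the
        // "host-wide IPAM lock" messages above describe.
        hostIPAMLock sync.Mutex
        nextOctet    = 135 // illustrative: the log hands out .135, then .136
    )

    func assign(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Println(pod, "about to acquire host-wide IPAM lock")
        hostIPAMLock.Lock()
        fmt.Println(pod, "acquired host-wide IPAM lock")
        ip := fmt.Sprintf("192.168.88.%d/26", nextOctet)
        nextOctet++
        hostIPAMLock.Unlock()
        fmt.Println(pod, "released host-wide IPAM lock, assigned", ip)
    }

    func main() {
        var wg sync.WaitGroup
        for _, pod := range []string{
            "calico-apiserver-56878495cb-jls4r",
            "calico-apiserver-56878495cb-t9bs5",
        } {
            wg.Add(1)
            go assign(pod, &wg)
        }
        wg.Wait()
    }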
Jan 23 18:43:57.699595 containerd[1598]: 2026-01-23 18:43:57.650 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" HandleID="k8s-pod-network.9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Workload="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.701357 containerd[1598]: 2026-01-23 18:43:57.656 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0", GenerateName:"calico-apiserver-56878495cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"50725488-4a1d-4f65-a7da-a4a923730733", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56878495cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56878495cb-t9bs5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b309d5bc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:57.701357 containerd[1598]: 2026-01-23 18:43:57.656 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.701357 containerd[1598]: 2026-01-23 18:43:57.656 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b309d5bc3e ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.701357 containerd[1598]: 2026-01-23 18:43:57.662 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.701357 containerd[1598]: 2026-01-23 18:43:57.663 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0", GenerateName:"calico-apiserver-56878495cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"50725488-4a1d-4f65-a7da-a4a923730733", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56878495cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb", Pod:"calico-apiserver-56878495cb-t9bs5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b309d5bc3e", MAC:"c6:fc:27:5c:79:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:57.701357 containerd[1598]: 2026-01-23 18:43:57.692 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" Namespace="calico-apiserver" Pod="calico-apiserver-56878495cb-t9bs5" WorkloadEndpoint="localhost-k8s-calico--apiserver--56878495cb--t9bs5-eth0" Jan 23 18:43:57.721000 audit: BPF prog-id=241 op=LOAD Jan 23 18:43:57.722000 audit: BPF prog-id=242 op=LOAD Jan 23 18:43:57.722000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.722000 audit: BPF prog-id=242 op=UNLOAD Jan 23 18:43:57.722000 audit[4882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.722000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.722000 audit: BPF prog-id=243 op=LOAD Jan 23 18:43:57.722000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.722000 audit: BPF prog-id=244 op=LOAD Jan 23 18:43:57.722000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.722000 audit: BPF prog-id=244 op=UNLOAD Jan 23 18:43:57.722000 audit[4882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.724000 audit: BPF prog-id=243 op=UNLOAD Jan 23 18:43:57.724000 audit[4882]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.724000 audit: BPF prog-id=245 op=LOAD Jan 23 18:43:57.724000 audit[4882]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4865 pid=4882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.724000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137636563633830623932373135626530333863656164393731393437 Jan 23 18:43:57.728467 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:57.731000 audit[4912]: NETFILTER_CFG table=filter:139 family=2 entries=53 op=nft_register_chain pid=4912 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:57.731000 audit[4912]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffd998092b0 a2=0 a3=7ffd9980929c items=0 ppid=4147 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.731000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:57.747691 containerd[1598]: time="2026-01-23T18:43:57.747559293Z" level=info msg="connecting to shim 9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb" address="unix:///run/containerd/s/8632186a6ed488187bc28974f926502a28eda1355c701946301c75d20f029211" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:57.764657 containerd[1598]: time="2026-01-23T18:43:57.764535642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5dcz,Uid:ac222387-3b7e-4f68-972a-ec412c252e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b\"" Jan 23 18:43:57.765618 kubelet[2814]: E0123 18:43:57.765559 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:57.770984 containerd[1598]: time="2026-01-23T18:43:57.770792295Z" level=info msg="CreateContainer within sandbox \"7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:43:57.805690 containerd[1598]: time="2026-01-23T18:43:57.805612941Z" level=info msg="Container fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:43:57.813654 systemd[1]: Started cri-containerd-9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb.scope - libcontainer container 9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb. 
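The recurring kubelet dns.go message ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") is emitted when the node's resolver configuration lists more nameservers than kubelet will pass through to pods; to our understanding that limit is three, matching the classic libc resolver, which is consistent with exactly three entries surviving here. A minimal Go sketch of the same check, for illustration only (the /etc/resolv.conf path and the limit are assumptions, not read from this log):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the limit kubelet applies when building a pod's
    // resolver configuration (three entries, matching the classic libc limit).
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf") // assumed host path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var nameservers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        if len(nameservers) > maxNameservers {
            fmt.Printf("limit exceeded: %d nameservers found, only %v would be applied\n",
                len(nameservers), nameservers[:maxNameservers])
        } else {
            fmt.Printf("nameservers within limit: %v\n", nameservers)
        }
    }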
Jan 23 18:43:57.816072 containerd[1598]: time="2026-01-23T18:43:57.815633512Z" level=info msg="CreateContainer within sandbox \"7b93aafde8a20bf220fa0c372341650de81b8617812ab394267a9ad2e3f68c7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf\"" Jan 23 18:43:57.819889 containerd[1598]: time="2026-01-23T18:43:57.819662148Z" level=info msg="StartContainer for \"fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf\"" Jan 23 18:43:57.822974 containerd[1598]: time="2026-01-23T18:43:57.822912675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-jls4r,Uid:2647b35f-a248-488d-8f41-2052dd32f727,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"17cecc80b92715be038cead97194733e2790da7afc20106b6a55326f4044b206\"" Jan 23 18:43:57.824229 containerd[1598]: time="2026-01-23T18:43:57.823161714Z" level=info msg="connecting to shim fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf" address="unix:///run/containerd/s/cdc183e4251e47de01077c60781275199db0192be676e186a024d9fcfc9f254b" protocol=ttrpc version=3 Jan 23 18:43:57.826398 containerd[1598]: time="2026-01-23T18:43:57.826344075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:43:57.840000 audit: BPF prog-id=246 op=LOAD Jan 23 18:43:57.841000 audit: BPF prog-id=247 op=LOAD Jan 23 18:43:57.841000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.841000 audit: BPF prog-id=247 op=UNLOAD Jan 23 18:43:57.841000 audit[4940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.842000 audit: BPF prog-id=248 op=LOAD Jan 23 18:43:57.842000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.842000 audit: BPF prog-id=249 op=LOAD Jan 23 18:43:57.842000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 
a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.842000 audit: BPF prog-id=249 op=UNLOAD Jan 23 18:43:57.842000 audit[4940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.842000 audit: BPF prog-id=248 op=UNLOAD Jan 23 18:43:57.842000 audit[4940]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.842000 audit: BPF prog-id=250 op=LOAD Jan 23 18:43:57.842000 audit[4940]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4924 pid=4940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963326365313532623033373464613937373135323330316361333239 Jan 23 18:43:57.845009 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:57.865918 systemd[1]: Started cri-containerd-fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf.scope - libcontainer container fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf. 
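The container lifecycle above (RunPodSandbox returning a sandbox id, CreateContainer within that sandbox, then StartContainer) all happens in containerd's k8s.io namespace, the same namespace named on the shim connections. A hedged sketch that lists what is running there with the containerd Go client (assuming the pre-2.0 module path github.com/containerd/containerd; crictl against the same socket shows equivalent information):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd instance these log lines come from.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes sandboxes and containers live in the "k8s.io" namespace,
        // matching the namespace=k8s.io field on the shim connections.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            img, err := c.Image(ctx)
            if err != nil {
                fmt.Println(c.ID(), "(image unknown)")
                continue
            }
            fmt.Println(c.ID(), img.Name())
        }
    }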
Jan 23 18:43:57.901000 audit: BPF prog-id=251 op=LOAD Jan 23 18:43:57.901000 audit: BPF prog-id=252 op=LOAD Jan 23 18:43:57.901000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.902000 audit: BPF prog-id=252 op=UNLOAD Jan 23 18:43:57.902000 audit[4967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.902000 audit: BPF prog-id=253 op=LOAD Jan 23 18:43:57.902000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.902000 audit: BPF prog-id=254 op=LOAD Jan 23 18:43:57.902000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.902000 audit: BPF prog-id=254 op=UNLOAD Jan 23 18:43:57.902000 audit[4967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.902000 audit: BPF prog-id=253 op=UNLOAD Jan 23 18:43:57.902000 audit[4967]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.902000 audit: BPF prog-id=255 op=LOAD Jan 23 18:43:57.902000 audit[4967]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4821 pid=4967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:57.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661653230656235656337303839646265343034373563353966313931 Jan 23 18:43:57.915048 containerd[1598]: time="2026-01-23T18:43:57.914791280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56878495cb-t9bs5,Uid:50725488-4a1d-4f65-a7da-a4a923730733,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c2ce152b0374da977152301ca32934708c3939ad9c6c9251c94e645f59bf2eb\"" Jan 23 18:43:57.922794 containerd[1598]: time="2026-01-23T18:43:57.922665042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:57.924770 containerd[1598]: time="2026-01-23T18:43:57.924729006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:43:57.924770 containerd[1598]: time="2026-01-23T18:43:57.924757376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:57.925342 kubelet[2814]: E0123 18:43:57.925061 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:43:57.925342 kubelet[2814]: E0123 18:43:57.925142 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:43:57.925559 kubelet[2814]: E0123 18:43:57.925478 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g42rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56878495cb-jls4r_calico-apiserver(2647b35f-a248-488d-8f41-2052dd32f727): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:57.926963 kubelet[2814]: E0123 18:43:57.926765 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:43:57.927812 containerd[1598]: time="2026-01-23T18:43:57.927730573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:43:57.933824 containerd[1598]: time="2026-01-23T18:43:57.933718980Z" level=info msg="StartContainer for \"fae20eb5ec7089dbe40475c59f1916df7637d70c0cc7167db35c0025d8f2c5cf\" returns successfully" Jan 23 18:43:58.003884 containerd[1598]: time="2026-01-23T18:43:58.003749387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:58.005681 containerd[1598]: time="2026-01-23T18:43:58.005571396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 
18:43:58.005681 containerd[1598]: time="2026-01-23T18:43:58.005659428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:58.006007 kubelet[2814]: E0123 18:43:58.005845 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:43:58.006007 kubelet[2814]: E0123 18:43:58.005936 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:43:58.006406 kubelet[2814]: E0123 18:43:58.006192 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8l8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56878495cb-t9bs5_calico-apiserver(50725488-4a1d-4f65-a7da-a4a923730733): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:58.007937 kubelet[2814]: E0123 18:43:58.007827 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:43:58.126244 containerd[1598]: time="2026-01-23T18:43:58.126080905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bd45f567-rc4xx,Uid:4067c734-cff1-4419-879a-3fc371d855f2,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:43:58.255805 systemd-networkd[1511]: cali27afd7c79b6: Link UP Jan 23 18:43:58.257595 systemd-networkd[1511]: cali27afd7c79b6: Gained carrier Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.169 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0 calico-apiserver-6bd45f567- calico-apiserver 4067c734-cff1-4419-879a-3fc371d855f2 888 0 2026-01-23 18:43:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bd45f567 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bd45f567-rc4xx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali27afd7c79b6 [] [] }} ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.169 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.205 [INFO][5021] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" HandleID="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Workload="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.206 [INFO][5021] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" HandleID="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Workload="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bd45f567-rc4xx", "timestamp":"2026-01-23 18:43:58.205958966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.206 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.206 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.206 [INFO][5021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.214 [INFO][5021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.221 [INFO][5021] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.227 [INFO][5021] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.229 [INFO][5021] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.232 [INFO][5021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.232 [INFO][5021] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.234 [INFO][5021] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715 Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.240 [INFO][5021] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.248 [INFO][5021] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.248 [INFO][5021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" host="localhost" Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.248 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
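The IPAM entries above show the CNI plugin taking the host-wide lock, loading the affine block 192.168.88.128/26 for this host, and claiming 192.168.88.137 from it before releasing the lock. As a quick sanity check (not Calico's own allocator, just the Go standard library), the sketch below confirms that the claimed address really falls inside that /26 and how many addresses the block holds:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block the IPAM log reports an affinity for on this host.
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Address the plugin reports as claimed for the new endpoint.
	addr := netip.MustParseAddr("192.168.88.137")

	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
	fmt.Printf("%s inside block: %v\n", addr, block.Contains(addr))
}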
Jan 23 18:43:58.302477 containerd[1598]: 2026-01-23 18:43:58.248 [INFO][5021] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" HandleID="k8s-pod-network.f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Workload="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.311535 containerd[1598]: 2026-01-23 18:43:58.252 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0", GenerateName:"calico-apiserver-6bd45f567-", Namespace:"calico-apiserver", SelfLink:"", UID:"4067c734-cff1-4419-879a-3fc371d855f2", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bd45f567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bd45f567-rc4xx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27afd7c79b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:58.311535 containerd[1598]: 2026-01-23 18:43:58.252 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.311535 containerd[1598]: 2026-01-23 18:43:58.252 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27afd7c79b6 ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.311535 containerd[1598]: 2026-01-23 18:43:58.259 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.311535 containerd[1598]: 2026-01-23 18:43:58.259 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0", GenerateName:"calico-apiserver-6bd45f567-", Namespace:"calico-apiserver", SelfLink:"", UID:"4067c734-cff1-4419-879a-3fc371d855f2", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 43, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bd45f567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715", Pod:"calico-apiserver-6bd45f567-rc4xx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27afd7c79b6", MAC:"9e:dc:c5:ac:ef:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:43:58.311535 containerd[1598]: 2026-01-23 18:43:58.297 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" Namespace="calico-apiserver" Pod="calico-apiserver-6bd45f567-rc4xx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bd45f567--rc4xx-eth0" Jan 23 18:43:58.318810 systemd-networkd[1511]: cali0155be4db00: Gained IPv6LL Jan 23 18:43:58.338000 audit[5038]: NETFILTER_CFG table=filter:140 family=2 entries=57 op=nft_register_chain pid=5038 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 18:43:58.338000 audit[5038]: SYSCALL arch=c000003e syscall=46 success=yes exit=27796 a0=3 a1=7ffcf83723c0 a2=0 a3=7ffcf83723ac items=0 ppid=4147 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.338000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 18:43:58.352085 containerd[1598]: time="2026-01-23T18:43:58.351944694Z" level=info msg="connecting to shim f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715" address="unix:///run/containerd/s/82cc0a42011800100ef95648df4c44dc4755729b72a0698678430985b32f7f5a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:43:58.479906 systemd[1]: Started cri-containerd-f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715.scope - libcontainer container f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715. 
Jan 23 18:43:58.493363 kubelet[2814]: E0123 18:43:58.493238 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:58.495765 kubelet[2814]: E0123 18:43:58.495724 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:43:58.500135 kubelet[2814]: E0123 18:43:58.500097 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:43:58.500443 kubelet[2814]: E0123 18:43:58.500413 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:43:58.519000 audit: BPF prog-id=256 op=LOAD Jan 23 18:43:58.521577 kubelet[2814]: I0123 18:43:58.521213 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p5dcz" podStartSLOduration=39.521192333 podStartE2EDuration="39.521192333s" podCreationTimestamp="2026-01-23 18:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:43:58.520433235 +0000 UTC m=+42.658002398" watchObservedRunningTime="2026-01-23 18:43:58.521192333 +0000 UTC m=+42.658761496" Jan 23 18:43:58.521000 audit: BPF prog-id=257 op=LOAD Jan 23 18:43:58.521000 audit[5060]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.524000 audit: BPF prog-id=257 op=UNLOAD Jan 23 18:43:58.524000 audit[5060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.525000 audit: BPF prog-id=258 op=LOAD Jan 23 18:43:58.525000 audit[5060]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.526000 audit: BPF prog-id=259 op=LOAD Jan 23 18:43:58.526000 audit[5060]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.526000 audit: BPF prog-id=259 op=UNLOAD Jan 23 18:43:58.526000 audit[5060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.526000 audit: BPF prog-id=258 op=UNLOAD Jan 23 18:43:58.526000 audit[5060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.526000 audit: BPF prog-id=260 op=LOAD Jan 23 18:43:58.526000 audit[5060]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5048 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:43:58.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635613464383164313331336638383565633334356635656461326665 Jan 23 18:43:58.530079 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 18:43:58.563000 audit[5082]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:58.563000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd91ceb310 a2=0 a3=7ffd91ceb2fc items=0 ppid=2926 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.563000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:58.571000 audit[5082]: NETFILTER_CFG table=nat:142 family=2 entries=44 op=nft_register_rule pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:58.571000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd91ceb310 a2=0 a3=7ffd91ceb2fc items=0 ppid=2926 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:58.636777 containerd[1598]: time="2026-01-23T18:43:58.636639691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bd45f567-rc4xx,Uid:4067c734-cff1-4419-879a-3fc371d855f2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f5a4d81d1313f885ec345f5eda2fed6581de92704e46cd74129904a715f9c715\"" Jan 23 18:43:58.640194 containerd[1598]: time="2026-01-23T18:43:58.640111474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:43:58.642000 audit[5090]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:58.642000 audit[5090]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe832abe10 a2=0 a3=7ffe832abdfc items=0 ppid=2926 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:58.642000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:58.693000 audit[5090]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5090 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:58.693000 audit[5090]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe832abe10 a2=0 a3=7ffe832abdfc items=0 ppid=2926 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
18:43:58.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:58.720710 containerd[1598]: time="2026-01-23T18:43:58.720597771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:43:58.722466 containerd[1598]: time="2026-01-23T18:43:58.722385004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:43:58.722600 containerd[1598]: time="2026-01-23T18:43:58.722473918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:43:58.722944 kubelet[2814]: E0123 18:43:58.722780 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:43:58.723314 kubelet[2814]: E0123 18:43:58.723075 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:43:58.723501 kubelet[2814]: E0123 18:43:58.723420 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjcc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bd45f567-rc4xx_calico-apiserver(4067c734-cff1-4419-879a-3fc371d855f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:43:58.724835 kubelet[2814]: E0123 18:43:58.724799 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:43:58.891637 systemd-networkd[1511]: cali7b309d5bc3e: Gained IPv6LL Jan 23 18:43:59.093596 systemd-networkd[1511]: cali1d0787a3d24: Gained IPv6LL Jan 23 18:43:59.341108 systemd-networkd[1511]: cali837df234ff8: Gained IPv6LL Jan 23 18:43:59.527965 kubelet[2814]: E0123 18:43:59.527423 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:43:59.527965 kubelet[2814]: E0123 18:43:59.527942 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:43:59.529721 kubelet[2814]: E0123 18:43:59.528227 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:43:59.529721 kubelet[2814]: E0123 18:43:59.528926 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:43:59.627000 audit[5093]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:59.636071 kernel: kauditd_printk_skb: 264 callbacks suppressed Jan 23 18:43:59.636188 kernel: audit: type=1325 audit(1769193839.627:751): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:59.627000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7106c7c0 a2=0 a3=7ffc7106c7ac items=0 ppid=2926 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:59.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:59.646336 kernel: audit: type=1300 audit(1769193839.627:751): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7106c7c0 a2=0 a3=7ffc7106c7ac items=0 ppid=2926 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:59.646394 kernel: audit: type=1327 audit(1769193839.627:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:59.648000 audit[5093]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:59.651352 kernel: audit: type=1325 audit(1769193839.648:752): table=nat:146 family=2 entries=20 op=nft_register_rule pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:43:59.648000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc7106c7c0 a2=0 a3=7ffc7106c7ac items=0 ppid=2926 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:59.666135 kernel: audit: type=1300 audit(1769193839.648:752): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc7106c7c0 a2=0 a3=7ffc7106c7ac items=0 ppid=2926 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:43:59.666226 kernel: audit: type=1327 audit(1769193839.648:752): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:43:59.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:44:00.171624 systemd-networkd[1511]: cali27afd7c79b6: Gained IPv6LL Jan 23 18:44:00.529671 kubelet[2814]: E0123 18:44:00.529427 2814 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:00.531208 kubelet[2814]: E0123 18:44:00.531087 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:44:07.125367 containerd[1598]: time="2026-01-23T18:44:07.125304945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:44:07.186027 containerd[1598]: time="2026-01-23T18:44:07.185976489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:07.188180 containerd[1598]: time="2026-01-23T18:44:07.187447496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:44:07.188180 containerd[1598]: time="2026-01-23T18:44:07.187517157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:07.188374 kubelet[2814]: E0123 18:44:07.187710 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:44:07.188374 kubelet[2814]: E0123 18:44:07.187750 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:44:07.188374 kubelet[2814]: E0123 18:44:07.187975 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9ffb2c439af2483eba9ce9f173bc31b4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d457c8689-kch4w_calico-system(934ceaea-a5ec-4119-99e0-f63128ff37ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:07.188963 containerd[1598]: time="2026-01-23T18:44:07.188346388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:44:07.254814 containerd[1598]: time="2026-01-23T18:44:07.254708372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:07.256231 containerd[1598]: time="2026-01-23T18:44:07.256154924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:44:07.256353 containerd[1598]: time="2026-01-23T18:44:07.256182992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:07.256559 kubelet[2814]: E0123 18:44:07.256498 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:44:07.256621 kubelet[2814]: E0123 18:44:07.256561 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:44:07.257173 kubelet[2814]: E0123 18:44:07.256850 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp2lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-276fc_calico-system(72e54e47-91e4-415c-876e-aa36180ac3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:07.257370 containerd[1598]: time="2026-01-23T18:44:07.256942047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:44:07.258691 kubelet[2814]: E0123 18:44:07.258637 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" 
podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:44:07.325433 containerd[1598]: time="2026-01-23T18:44:07.325333852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:07.326777 containerd[1598]: time="2026-01-23T18:44:07.326730352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:44:07.326963 containerd[1598]: time="2026-01-23T18:44:07.326771670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:07.327124 kubelet[2814]: E0123 18:44:07.327043 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:44:07.327124 kubelet[2814]: E0123 18:44:07.327113 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:44:07.327322 kubelet[2814]: E0123 18:44:07.327220 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d457c8689-kch4w_calico-system(934ceaea-a5ec-4119-99e0-f63128ff37ad): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:07.329026 kubelet[2814]: E0123 18:44:07.328957 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:44:08.129775 containerd[1598]: time="2026-01-23T18:44:08.129470238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:44:08.186961 containerd[1598]: time="2026-01-23T18:44:08.186864473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:08.188446 containerd[1598]: time="2026-01-23T18:44:08.188378120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:44:08.188519 containerd[1598]: time="2026-01-23T18:44:08.188492451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:08.188706 kubelet[2814]: E0123 18:44:08.188659 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:44:08.189139 kubelet[2814]: E0123 18:44:08.188716 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:44:08.189139 kubelet[2814]: E0123 18:44:08.188928 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:08.192982 containerd[1598]: time="2026-01-23T18:44:08.192805886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:44:08.257496 containerd[1598]: time="2026-01-23T18:44:08.257352450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:08.258854 containerd[1598]: time="2026-01-23T18:44:08.258763596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:44:08.258910 containerd[1598]: time="2026-01-23T18:44:08.258880484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:08.259373 kubelet[2814]: E0123 18:44:08.259245 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:44:08.259373 kubelet[2814]: E0123 18:44:08.259351 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:44:08.259506 kubelet[2814]: E0123 18:44:08.259475 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:08.260731 kubelet[2814]: E0123 18:44:08.260679 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:44:09.126563 containerd[1598]: time="2026-01-23T18:44:09.126426597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:44:09.210216 containerd[1598]: time="2026-01-23T18:44:09.210114064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
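Every pull above fails the same way: containerd asks ghcr.io to resolve the tag, the registry answers 404 Not Found, and kubelet surfaces that as ErrImagePull and later ImagePullBackOff. The sketch below reproduces that resolution step against the OCI distribution API; it assumes ghcr.io's usual anonymous token endpoint, takes the repository and tag from the log, and is illustrative rather than how containerd itself is implemented:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// checkTag asks the registry whether a tag resolves, which is the same lookup
// containerd performs before pulling. Assumes ghcr.io's anonymous pull-token
// flow and does not handle non-2xx token responses; a sketch, not a client.
func checkTag(repo, tag string) (int, error) {
	// 1. Fetch an anonymous pull token for the repository.
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return 0, err
	}

	// 2. HEAD the manifest; 200 means the tag exists, 404 matches the log above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	res.Body.Close()
	return res.StatusCode, nil
}

func main() {
	// Repository and tag taken from the failing pulls in the log.
	status, err := checkTag("flatcar/calico/kube-controllers", "v3.30.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("manifest status:", status)
}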
Jan 23 18:44:09.211913 containerd[1598]: time="2026-01-23T18:44:09.211776186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:44:09.211913 containerd[1598]: time="2026-01-23T18:44:09.211827383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:09.212224 kubelet[2814]: E0123 18:44:09.212177 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:44:09.212629 kubelet[2814]: E0123 18:44:09.212237 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:44:09.212629 kubelet[2814]: E0123 18:44:09.212401 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2dlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bdcd99c5b-6vx2x_calico-system(c6f4bf65-2b8c-4712-a434-da7d69d938c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:09.214010 kubelet[2814]: E0123 18:44:09.213958 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:44:11.126738 containerd[1598]: time="2026-01-23T18:44:11.126435410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:44:11.186672 containerd[1598]: time="2026-01-23T18:44:11.186560730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:11.188212 containerd[1598]: time="2026-01-23T18:44:11.188173551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:44:11.188578 containerd[1598]: time="2026-01-23T18:44:11.188232137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:11.188676 kubelet[2814]: E0123 18:44:11.188564 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:11.188676 kubelet[2814]: E0123 18:44:11.188607 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:11.189473 kubelet[2814]: E0123 18:44:11.188724 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g42rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56878495cb-jls4r_calico-apiserver(2647b35f-a248-488d-8f41-2052dd32f727): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:11.190375 kubelet[2814]: E0123 18:44:11.190243 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:44:13.126132 containerd[1598]: time="2026-01-23T18:44:13.126050663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:44:13.190304 containerd[1598]: time="2026-01-23T18:44:13.190207179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:13.191430 containerd[1598]: time="2026-01-23T18:44:13.191346366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:44:13.191430 containerd[1598]: time="2026-01-23T18:44:13.191368689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:13.191619 
kubelet[2814]: E0123 18:44:13.191580 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:13.192229 kubelet[2814]: E0123 18:44:13.191630 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:13.192229 kubelet[2814]: E0123 18:44:13.191770 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjcc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bd45f567-rc4xx_calico-apiserver(4067c734-cff1-4419-879a-3fc371d855f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:13.193391 kubelet[2814]: E0123 18:44:13.193351 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:44:13.459384 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974). Jan 23 18:44:13.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:45974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:13.467315 kernel: audit: type=1130 audit(1769193853.458:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:45974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:13.572000 audit[5115]: USER_ACCT pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.576590 sshd[5115]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:13.576444 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:13.573000 audit[5115]: CRED_ACQ pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.583100 systemd-logind[1574]: New session 9 of user core. Jan 23 18:44:13.588134 kernel: audit: type=1101 audit(1769193853.572:754): pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.588228 kernel: audit: type=1103 audit(1769193853.573:755): pid=5115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.588318 kernel: audit: type=1006 audit(1769193853.573:756): pid=5115 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 23 18:44:13.573000 audit[5115]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc352cdd20 a2=3 a3=0 items=0 ppid=1 pid=5115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:13.601470 kernel: audit: type=1300 audit(1769193853.573:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc352cdd20 a2=3 a3=0 items=0 ppid=1 pid=5115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:13.601538 kernel: audit: type=1327 audit(1769193853.573:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:13.573000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:13.614548 systemd[1]: Started session-9.scope - 
Session 9 of User core. Jan 23 18:44:13.616000 audit[5115]: USER_START pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.628365 kernel: audit: type=1105 audit(1769193853.616:757): pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.619000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.639321 kernel: audit: type=1103 audit(1769193853.619:758): pid=5119 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.735218 sshd[5119]: Connection closed by 10.0.0.1 port 45974 Jan 23 18:44:13.735607 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:13.736000 audit[5115]: USER_END pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.740414 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:45974.service: Deactivated successfully. Jan 23 18:44:13.743766 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:44:13.748495 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:44:13.750035 systemd-logind[1574]: Removed session 9. Jan 23 18:44:13.736000 audit[5115]: CRED_DISP pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.757866 kernel: audit: type=1106 audit(1769193853.736:759): pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.757942 kernel: audit: type=1104 audit(1769193853.736:760): pid=5115 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:13.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:45974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:14.126649 containerd[1598]: time="2026-01-23T18:44:14.126472426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:44:14.204336 containerd[1598]: time="2026-01-23T18:44:14.204115663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:14.205925 containerd[1598]: time="2026-01-23T18:44:14.205808846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:44:14.205925 containerd[1598]: time="2026-01-23T18:44:14.205884686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:14.206201 kubelet[2814]: E0123 18:44:14.206133 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:14.206201 kubelet[2814]: E0123 18:44:14.206185 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:14.206875 kubelet[2814]: E0123 18:44:14.206392 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8l8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56878495cb-t9bs5_calico-apiserver(50725488-4a1d-4f65-a7da-a4a923730733): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:14.207691 kubelet[2814]: E0123 18:44:14.207636 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:44:18.125329 kubelet[2814]: E0123 18:44:18.125232 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:44:18.753818 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:45988.service - OpenSSH per-connection server daemon (10.0.0.1:45988). Jan 23 18:44:18.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:45988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:18.756743 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:18.756839 kernel: audit: type=1130 audit(1769193858.753:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:45988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:18.834000 audit[5137]: USER_ACCT pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.838802 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:18.842042 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 45988 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:18.835000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.845746 systemd-logind[1574]: New session 10 of user core. Jan 23 18:44:18.852296 kernel: audit: type=1101 audit(1769193858.834:763): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.852373 kernel: audit: type=1103 audit(1769193858.835:764): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.852410 kernel: audit: type=1006 audit(1769193858.836:765): pid=5137 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 23 18:44:18.836000 audit[5137]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa3cd4110 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:18.867446 kernel: audit: type=1300 audit(1769193858.836:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa3cd4110 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:18.867546 kernel: audit: type=1327 audit(1769193858.836:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:18.836000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:18.876788 systemd[1]: Started session-10.scope - Session 10 of User core. 
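Interleaved with the pull errors is a steady series of very short SSH sessions: systemd starts a per-connection sshd unit, PAM opens a session for user core, and the connection is closed and the unit deactivated again within about a second. Pairing the systemd-logind "New session" and "Removed session" lines is enough to measure each one. The sketch below is written against the exact line shapes in this log (the sample strings are copied from it); the year 2026 is taken from the containerd timestamps, since the journal short-format timestamps carry no year of their own.

# Sketch: pair "New session N" / "Removed session N" events from
# systemd-logind journal lines and report each session's duration.
import re
from datetime import datetime

LINE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"systemd-logind\[\d+\]: (?P<event>New|Removed) session (?P<sid>\d+)"
)

def parse_ts(ts: str, year: int = 2026) -> datetime:
    # Journal short timestamps have no year; one is supplied for parsing.
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        sid, ts = m["sid"], parse_ts(m["ts"])
        if m["event"] == "New":
            opened[sid] = ts
        elif sid in opened:
            yield sid, (ts - opened.pop(sid)).total_seconds()

sample = [
    "Jan 23 18:44:18.845746 systemd-logind[1574]: New session 10 of user core.",
    "Jan 23 18:44:19.020623 systemd-logind[1574]: Removed session 10.",
]
for sid, secs in session_durations(sample):
    print(f"session {sid}: open for {secs:.3f}s")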
Jan 23 18:44:18.880000 audit[5137]: USER_START pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.883000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.904212 kernel: audit: type=1105 audit(1769193858.880:766): pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:18.905106 kernel: audit: type=1103 audit(1769193858.883:767): pid=5141 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:19.007210 sshd[5141]: Connection closed by 10.0.0.1 port 45988 Jan 23 18:44:19.007475 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:19.008000 audit[5137]: USER_END pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:19.013145 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:45988.service: Deactivated successfully. Jan 23 18:44:19.016234 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:44:19.018186 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:44:19.008000 audit[5137]: CRED_DISP pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:19.020623 systemd-logind[1574]: Removed session 10. Jan 23 18:44:19.027140 kernel: audit: type=1106 audit(1769193859.008:768): pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:19.027228 kernel: audit: type=1104 audit(1769193859.008:769): pid=5137 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:19.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:45988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:20.148358 kubelet[2814]: E0123 18:44:20.148222 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:44:21.516306 kubelet[2814]: E0123 18:44:21.516139 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:22.128560 kubelet[2814]: E0123 18:44:22.128507 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:44:22.130161 kubelet[2814]: E0123 18:44:22.130116 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:44:23.127133 kubelet[2814]: E0123 18:44:23.127065 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:44:24.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:46904 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:24.028875 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:46904.service - OpenSSH per-connection server daemon (10.0.0.1:46904). Jan 23 18:44:24.031950 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:24.032026 kernel: audit: type=1130 audit(1769193864.027:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:46904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:24.118000 audit[5184]: USER_ACCT pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.120481 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 46904 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:24.122884 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:24.120000 audit[5184]: CRED_ACQ pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.133057 kubelet[2814]: E0123 18:44:24.132648 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:44:24.133658 systemd-logind[1574]: New session 11 of user core. 
Jan 23 18:44:24.139970 kernel: audit: type=1101 audit(1769193864.118:772): pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.140218 kernel: audit: type=1103 audit(1769193864.120:773): pid=5184 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.120000 audit[5184]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1c1801c0 a2=3 a3=0 items=0 ppid=1 pid=5184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:24.161004 kernel: audit: type=1006 audit(1769193864.120:774): pid=5184 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 18:44:24.161091 kernel: audit: type=1300 audit(1769193864.120:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1c1801c0 a2=3 a3=0 items=0 ppid=1 pid=5184 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:24.161147 kernel: audit: type=1327 audit(1769193864.120:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:24.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:24.167704 systemd[1]: Started session-11.scope - Session 11 of User core. 
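The kernel echoes every userspace audit record with a numeric type, and the pairing visible on these lines gives the decoding directly: 1101 is USER_ACCT, 1103 CRED_ACQ, 1104 CRED_DISP, 1105 USER_START, 1106 USER_END, 1130 SERVICE_START, 1300 SYSCALL, 1327 PROCTITLE, and 1006 is the LOGIN record that assigns the new audit session id. A small lookup, as sketched below, makes the bare "kernel: audit: type=NNNN" lines readable without auditd tooling; it covers only the types that occur in this log, plus SERVICE_STOP for symmetry.

# Sketch: translate the numeric audit record types echoed by the kernel
# ("kernel: audit: type=1101 ...") into their names. Only the types seen
# in this log are listed; see linux/audit.h for the full set.
import re

AUDIT_TYPES = {
    1006: "LOGIN",          # new audit session id assigned (auid/ses change)
    1101: "USER_ACCT",      # PAM accounting decision
    1103: "CRED_ACQ",       # credentials acquired
    1104: "CRED_DISP",      # credentials disposed
    1105: "USER_START",     # PAM session opened
    1106: "USER_END",       # PAM session closed
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",        # syscall record
    1327: "PROCTITLE",      # hex-encoded process title
}

def label(line: str) -> str:
    m = re.search(r"audit: type=(\d+)", line)
    if not m:
        return line
    code = int(m.group(1))
    return f"{AUDIT_TYPES.get(code, 'UNKNOWN')} ({code}): {line}"

print(label("kernel: audit: type=1105 audit(1769193864.171:775): pid=5184 ..."))

# The PROCTITLE payload is hex-encoded; the value logged above decodes to:
print(bytes.fromhex("737368642D73657373696F6E3A20636F7265205B707269765D").decode())
# -> sshd-session: core [priv]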
Jan 23 18:44:24.171000 audit[5184]: USER_START pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.193340 kernel: audit: type=1105 audit(1769193864.171:775): pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.174000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.202338 kernel: audit: type=1103 audit(1769193864.174:776): pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.303034 sshd[5188]: Connection closed by 10.0.0.1 port 46904 Jan 23 18:44:24.303362 sshd-session[5184]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:24.304000 audit[5184]: USER_END pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.308857 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:46904.service: Deactivated successfully. Jan 23 18:44:24.311475 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:44:24.312864 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:44:24.314562 systemd-logind[1574]: Removed session 11. Jan 23 18:44:24.316363 kernel: audit: type=1106 audit(1769193864.304:777): pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.316459 kernel: audit: type=1104 audit(1769193864.304:778): pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.304000 audit[5184]: CRED_DISP pid=5184 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:24.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:46904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:25.125115 kubelet[2814]: E0123 18:44:25.124871 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:26.127326 kubelet[2814]: E0123 18:44:26.126354 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:44:27.125070 kubelet[2814]: E0123 18:44:27.124998 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:29.327387 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:29.327583 kernel: audit: type=1130 audit(1769193869.320:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:46912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:29.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:46912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:29.320996 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:46912.service - OpenSSH per-connection server daemon (10.0.0.1:46912). Jan 23 18:44:29.417000 audit[5205]: USER_ACCT pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.417715 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 46912 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:29.420668 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:29.418000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.427832 systemd-logind[1574]: New session 12 of user core. 
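The recurring dns.go "Nameserver limits exceeded" warning is a separate issue from the pull failures: the resolv.conf the kubelet consumes lists more nameservers than the three that the resolver (and the kubelet) will actually use, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the rest are silently dropped. A check like the sketch below, pointed at the resolv.conf the kubelet reads (the path here is an assumed default), shows which entries are being ignored.

# Sketch: report nameservers beyond the three-entry limit that will actually
# be used. The path is an assumption; point it at the resolv.conf the kubelet
# is configured to read on this node.
MAX_NAMESERVERS = 3

def check_resolv_conf(path: str = "/etc/resolv.conf") -> None:
    nameservers = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                nameservers.append(fields[1])
    applied = nameservers[:MAX_NAMESERVERS]
    ignored = nameservers[MAX_NAMESERVERS:]
    print("applied:", " ".join(applied))
    if ignored:
        print("ignored (over the limit):", " ".join(ignored))

if __name__ == "__main__":
    check_resolv_conf()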
Jan 23 18:44:29.438181 kernel: audit: type=1101 audit(1769193869.417:781): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.438342 kernel: audit: type=1103 audit(1769193869.418:782): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.438380 kernel: audit: type=1006 audit(1769193869.419:783): pid=5205 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 18:44:29.419000 audit[5205]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec2b5a480 a2=3 a3=0 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:29.455110 kernel: audit: type=1300 audit(1769193869.419:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec2b5a480 a2=3 a3=0 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:29.455220 kernel: audit: type=1327 audit(1769193869.419:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:29.419000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:29.461811 systemd[1]: Started session-12.scope - Session 12 of User core. 
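Since every pull error in this log is the same NotFound against a different ghcr.io/flatcar/calico image, a per-image tally of the containerd "PullImage ... failed" lines summarizes the blast radius at a glance. The snippet below is a small illustration matched to the escaped message format shown above; feed it saved journal text on standard input.

# Sketch: tally containerd 'PullImage "..." failed' lines per image reference.
# The regex targets the escaped message shape visible in this log.
import re
import sys
from collections import Counter

FAILED = re.compile(r'msg="PullImage \\?"(?P<image>[^"\\]+)\\?" failed"')

def count_failures(text: str) -> Counter:
    return Counter(m["image"] for m in FAILED.finditer(text))

if __name__ == "__main__":
    text = sys.stdin.read()
    for image, n in count_failures(text).most_common():
        print(f"{n:4d}  {image}")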
Jan 23 18:44:29.466000 audit[5205]: USER_START pid=5205 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.470000 audit[5209]: CRED_ACQ pid=5209 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.496735 kernel: audit: type=1105 audit(1769193869.466:784): pid=5205 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.496917 kernel: audit: type=1103 audit(1769193869.470:785): pid=5209 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.585611 sshd[5209]: Connection closed by 10.0.0.1 port 46912 Jan 23 18:44:29.586628 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:29.588000 audit[5205]: USER_END pid=5205 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.592387 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:46912.service: Deactivated successfully. Jan 23 18:44:29.595314 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:44:29.596509 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:44:29.598563 systemd-logind[1574]: Removed session 12. Jan 23 18:44:29.588000 audit[5205]: CRED_DISP pid=5205 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.610657 kernel: audit: type=1106 audit(1769193869.588:786): pid=5205 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.610753 kernel: audit: type=1104 audit(1769193869.588:787): pid=5205 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:29.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:46912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:31.131370 containerd[1598]: time="2026-01-23T18:44:31.131184267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:44:31.211671 containerd[1598]: time="2026-01-23T18:44:31.211565182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:31.213488 containerd[1598]: time="2026-01-23T18:44:31.213357175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:44:31.213488 containerd[1598]: time="2026-01-23T18:44:31.213414784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:31.213878 kubelet[2814]: E0123 18:44:31.213825 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:44:31.214493 kubelet[2814]: E0123 18:44:31.213901 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:44:31.214493 kubelet[2814]: E0123 18:44:31.214105 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp2lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-276fc_calico-system(72e54e47-91e4-415c-876e-aa36180ac3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:31.215400 kubelet[2814]: E0123 18:44:31.215344 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:44:34.126318 containerd[1598]: time="2026-01-23T18:44:34.126188427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:44:34.226642 containerd[1598]: time="2026-01-23T18:44:34.226520936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:34.228314 containerd[1598]: time="2026-01-23T18:44:34.228171227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:44:34.228314 containerd[1598]: time="2026-01-23T18:44:34.228222449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:34.228732 kubelet[2814]: E0123 18:44:34.228661 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:44:34.228732 kubelet[2814]: E0123 18:44:34.228749 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:44:34.229413 kubelet[2814]: E0123 18:44:34.229136 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9ffb2c439af2483eba9ce9f173bc31b4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d457c8689-kch4w_calico-system(934ceaea-a5ec-4119-99e0-f63128ff37ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:34.229610 containerd[1598]: time="2026-01-23T18:44:34.229358710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:44:34.302710 containerd[1598]: time="2026-01-23T18:44:34.302449453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:34.304236 containerd[1598]: time="2026-01-23T18:44:34.304078951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:44:34.304236 containerd[1598]: time="2026-01-23T18:44:34.304203353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:34.304556 kubelet[2814]: E0123 18:44:34.304491 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:44:34.304556 kubelet[2814]: E0123 18:44:34.304545 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:44:34.304923 kubelet[2814]: E0123 18:44:34.304745 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2dlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5bdcd99c5b-6vx2x_calico-system(c6f4bf65-2b8c-4712-a434-da7d69d938c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:34.305616 containerd[1598]: time="2026-01-23T18:44:34.305571047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:44:34.306847 kubelet[2814]: E0123 18:44:34.306706 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:44:34.369333 containerd[1598]: 
time="2026-01-23T18:44:34.369113369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:34.393579 containerd[1598]: time="2026-01-23T18:44:34.393212228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:34.393579 containerd[1598]: time="2026-01-23T18:44:34.393323963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:44:34.393867 kubelet[2814]: E0123 18:44:34.393716 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:44:34.393867 kubelet[2814]: E0123 18:44:34.393785 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:44:34.393963 kubelet[2814]: E0123 18:44:34.393914 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkf2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-d457c8689-kch4w_calico-system(934ceaea-a5ec-4119-99e0-f63128ff37ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:34.395731 kubelet[2814]: E0123 18:44:34.395595 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:44:34.602241 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:47892.service - OpenSSH per-connection server daemon (10.0.0.1:47892). Jan 23 18:44:34.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:47892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:34.605178 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:34.605243 kernel: audit: type=1130 audit(1769193874.602:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:47892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:34.695000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.697168 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 47892 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:34.700158 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:34.697000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.709077 systemd-logind[1574]: New session 13 of user core. 
Jan 23 18:44:34.720317 kernel: audit: type=1101 audit(1769193874.695:790): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.720439 kernel: audit: type=1103 audit(1769193874.697:791): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.727339 kernel: audit: type=1006 audit(1769193874.697:792): pid=5229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 18:44:34.727389 kernel: audit: type=1300 audit(1769193874.697:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc34c4dc00 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:34.697000 audit[5229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc34c4dc00 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:34.743320 kernel: audit: type=1327 audit(1769193874.697:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:34.697000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:34.744859 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 18:44:34.749000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.783037 kernel: audit: type=1105 audit(1769193874.749:793): pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.783161 kernel: audit: type=1103 audit(1769193874.752:794): pid=5233 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.752000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.854082 sshd[5233]: Connection closed by 10.0.0.1 port 47892 Jan 23 18:44:34.854625 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:34.857000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.857000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.873949 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:47892.service: Deactivated successfully. Jan 23 18:44:34.876586 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:44:34.877864 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:44:34.878691 kernel: audit: type=1106 audit(1769193874.857:795): pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.878732 kernel: audit: type=1104 audit(1769193874.857:796): pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:47892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:34.881382 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:47898.service - OpenSSH per-connection server daemon (10.0.0.1:47898). 
Jan 23 18:44:34.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:47898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:34.882921 systemd-logind[1574]: Removed session 13. Jan 23 18:44:34.951000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.952508 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 47898 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:34.953000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.953000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff16b080 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:34.953000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:34.955222 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:34.961826 systemd-logind[1574]: New session 14 of user core. Jan 23 18:44:34.985149 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:44:34.989000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:34.992000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.117611 sshd[5251]: Connection closed by 10.0.0.1 port 47898 Jan 23 18:44:35.118079 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:35.119000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.119000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.128656 containerd[1598]: time="2026-01-23T18:44:35.128558337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:44:35.133641 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:47898.service: Deactivated successfully. 
Jan 23 18:44:35.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:47898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:35.144893 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:44:35.146390 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:44:35.155598 systemd-logind[1574]: Removed session 14. Jan 23 18:44:35.162683 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:47906.service - OpenSSH per-connection server daemon (10.0.0.1:47906). Jan 23 18:44:35.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:47906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:35.214765 containerd[1598]: time="2026-01-23T18:44:35.213333667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:35.217344 containerd[1598]: time="2026-01-23T18:44:35.217093592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:44:35.217344 containerd[1598]: time="2026-01-23T18:44:35.217201092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:35.220926 kubelet[2814]: E0123 18:44:35.220803 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:35.220926 kubelet[2814]: E0123 18:44:35.220891 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:35.225299 kubelet[2814]: E0123 18:44:35.223582 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g42rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56878495cb-jls4r_calico-apiserver(2647b35f-a248-488d-8f41-2052dd32f727): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:35.226234 containerd[1598]: time="2026-01-23T18:44:35.226150654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:44:35.228317 kubelet[2814]: E0123 18:44:35.227168 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:44:35.292591 containerd[1598]: time="2026-01-23T18:44:35.292427566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:35.293981 containerd[1598]: time="2026-01-23T18:44:35.293844214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:44:35.293981 containerd[1598]: time="2026-01-23T18:44:35.293899318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:35.294201 kubelet[2814]: E0123 18:44:35.294106 2814 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:44:35.294787 kubelet[2814]: E0123 18:44:35.294752 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:44:35.295030 kubelet[2814]: E0123 18:44:35.294952 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:35.297192 containerd[1598]: time="2026-01-23T18:44:35.297079888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:44:35.304000 audit[5262]: USER_ACCT pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.305013 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 47906 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:35.306000 audit[5262]: CRED_ACQ pid=5262 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.307000 audit[5262]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6ad61350 a2=3 a3=0 items=0 ppid=1 pid=5262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:35.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:35.309113 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:35.316819 systemd-logind[1574]: New session 15 of user core. Jan 23 18:44:35.331806 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:44:35.336000 audit[5262]: USER_START pid=5262 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.338000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.373844 containerd[1598]: time="2026-01-23T18:44:35.373706705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:35.375405 containerd[1598]: time="2026-01-23T18:44:35.375327682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:44:35.375405 containerd[1598]: time="2026-01-23T18:44:35.375367451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:35.375699 kubelet[2814]: E0123 18:44:35.375649 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:44:35.375895 kubelet[2814]: E0123 18:44:35.375814 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:44:35.376098 kubelet[2814]: E0123 18:44:35.375991 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2smd_calico-system(f72bd6e0-6290-4ad0-99d3-a580eaff8fda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:35.378064 kubelet[2814]: E0123 18:44:35.377955 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:44:35.445069 sshd[5266]: Connection closed by 10.0.0.1 port 47906 Jan 23 18:44:35.445611 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:35.447000 audit[5262]: USER_END pid=5262 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.447000 audit[5262]: CRED_DISP pid=5262 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:35.451770 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:47906.service: Deactivated successfully. Jan 23 18:44:35.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:47906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:35.455730 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:44:35.457929 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:44:35.459929 systemd-logind[1574]: Removed session 15. Jan 23 18:44:37.126569 containerd[1598]: time="2026-01-23T18:44:37.126363005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:44:37.214749 containerd[1598]: time="2026-01-23T18:44:37.214628195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:37.216678 containerd[1598]: time="2026-01-23T18:44:37.216432543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:44:37.216782 containerd[1598]: time="2026-01-23T18:44:37.216610423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:37.217522 kubelet[2814]: E0123 18:44:37.217411 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:37.218107 kubelet[2814]: E0123 18:44:37.217519 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:37.218826 kubelet[2814]: E0123 18:44:37.218693 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjcc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bd45f567-rc4xx_calico-apiserver(4067c734-cff1-4419-879a-3fc371d855f2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:37.221313 kubelet[2814]: E0123 18:44:37.221150 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:44:38.125972 kubelet[2814]: E0123 18:44:38.125835 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:38.131371 containerd[1598]: time="2026-01-23T18:44:38.130621178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:44:38.203141 containerd[1598]: time="2026-01-23T18:44:38.203019339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:44:38.205234 containerd[1598]: time="2026-01-23T18:44:38.205138968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:44:38.205234 containerd[1598]: time="2026-01-23T18:44:38.205188420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 18:44:38.205673 kubelet[2814]: E0123 18:44:38.205568 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:38.205740 kubelet[2814]: E0123 18:44:38.205670 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:44:38.205919 kubelet[2814]: E0123 18:44:38.205842 2814 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8l8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-56878495cb-t9bs5_calico-apiserver(50725488-4a1d-4f65-a7da-a4a923730733): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:44:38.207302 kubelet[2814]: E0123 18:44:38.207160 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:44:40.475018 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914). Jan 23 18:44:40.478605 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 18:44:40.478697 kernel: audit: type=1130 audit(1769193880.475:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:47914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:40.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:47914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:40.565000 audit[5282]: USER_ACCT pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.566515 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:40.582207 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:40.570000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.602837 kernel: audit: type=1101 audit(1769193880.565:817): pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.602960 kernel: audit: type=1103 audit(1769193880.570:818): pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.602992 kernel: audit: type=1006 audit(1769193880.570:819): pid=5282 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 23 18:44:40.604712 systemd-logind[1574]: New session 16 of user core. Jan 23 18:44:40.609374 kernel: audit: type=1300 audit(1769193880.570:819): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbb0bc300 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:40.570000 audit[5282]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffbb0bc300 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:40.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:40.624439 kernel: audit: type=1327 audit(1769193880.570:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:40.626787 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 18:44:40.630000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.631000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.655801 kernel: audit: type=1105 audit(1769193880.630:820): pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.656004 kernel: audit: type=1103 audit(1769193880.631:821): pid=5286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.824908 sshd[5286]: Connection closed by 10.0.0.1 port 47914 Jan 23 18:44:40.826471 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:40.828000 audit[5282]: USER_END pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.845351 kernel: audit: type=1106 audit(1769193880.828:822): pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.831000 audit[5282]: CRED_DISP pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.858058 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:47914.service: Deactivated successfully. Jan 23 18:44:40.858562 kernel: audit: type=1104 audit(1769193880.831:823): pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:40.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:47914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:40.860760 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:44:40.862937 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:44:40.867014 systemd-logind[1574]: Removed session 16. 
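At this point the same NotFound failure has been logged for the goldmane, whisker, whisker-backend, kube-controllers, apiserver, csi and node-driver-registrar images, and the kubelet is about to move from ErrImagePull into ImagePullBackOff. A short stdlib sketch for tallying the failing references in a journal dump shaped like this one; the regular expression only encodes the escaped quoting visible in these lines and is an assumption about the dump format, not a kubelet or containerd interface:

import re
import sys
from collections import Counter

# Matches the image reference inside the pull-failure messages above, e.g.
#   failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"
# allowing for the extra backslash escaping in the nested kubelet errors.
PULL_FAILURE = re.compile(r'failed to pull and unpack image \\*"(?P<image>[^"\\]+)\\*"')

def tally_failures(lines):
    """Count pull-failure occurrences per image reference."""
    counts = Counter()
    for line in lines:
        for match in PULL_FAILURE.finditer(line):
            counts[match.group("image")] += 1
    return counts

if __name__ == "__main__":
    for image, count in tally_failures(sys.stdin).most_common():
        print(f"{count:4d}  {image}")

Fed the surrounding journal on stdin (for example piped from journalctl), it prints one line per image reference with its failure count.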
Jan 23 18:44:43.125723 kubelet[2814]: E0123 18:44:43.125628 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:44:44.127256 kubelet[2814]: E0123 18:44:44.126756 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:44:45.130175 kubelet[2814]: E0123 18:44:45.130047 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:44:45.840669 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:51982.service - OpenSSH per-connection server daemon (10.0.0.1:51982). Jan 23 18:44:45.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:51982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:45.842978 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:45.843177 kernel: audit: type=1130 audit(1769193885.840:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:51982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:45.954000 audit[5307]: USER_ACCT pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:45.955400 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 51982 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:45.958645 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:45.956000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:45.966424 systemd-logind[1574]: New session 17 of user core. 
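The dns.go warning just above means the node's resolv.conf lists more nameservers than the kubelet will pass through, so only the first entries are applied (1.1.1.1 1.0.0.1 8.8.8.8 here). A small sketch that reproduces the trimming locally; the three-entry limit mirrors the warning and common resolver behaviour and is an assumption, not a value read from this node's configuration:

MAX_NAMESERVERS = 3  # assumption mirroring the kubelet warning above

def applied_nameservers(resolv_conf_text, limit=MAX_NAMESERVERS):
    """Split resolv.conf nameservers into the applied prefix and the omitted rest."""
    servers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:limit], servers[limit:]

if __name__ == "__main__":
    with open("/etc/resolv.conf") as f:
        kept, omitted = applied_nameservers(f.read())
    print("applied:", " ".join(kept))
    if omitted:
        print("omitted:", " ".join(omitted))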
Jan 23 18:44:45.990855 kernel: audit: type=1101 audit(1769193885.954:826): pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:45.991467 kernel: audit: type=1103 audit(1769193885.956:827): pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:45.991556 kernel: audit: type=1006 audit(1769193885.956:828): pid=5307 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 18:44:45.996470 kernel: audit: type=1300 audit(1769193885.956:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff550143f0 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:45.956000 audit[5307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff550143f0 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:45.956000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:46.009302 kernel: audit: type=1327 audit(1769193885.956:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:46.011611 systemd[1]: Started session-17.scope - Session 17 of User core. 
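Interleaved with the Calico errors, the audit trail keeps recording short SSH sessions for user core, each USER_START paired with a USER_END carrying the same ses= id a fraction of a second later. A stdlib sketch that pairs those records from a plain-text journal dump and prints per-session durations; the timestamp layout and field names are copied from the lines above, and the hard-coded year is an assumption since these short timestamps omit it (the containerd entries in this log show 2026):

import re
import sys
from datetime import datetime

# Audit records in this journal look like:
#   Jan 23 18:44:40.630000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=16 ...
AUDIT_LINE = re.compile(
    r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: "
    r"(?P<kind>USER_START|USER_END) .*?\bses=(?P<ses>\d+)"
)

def session_durations(lines, year=2026):
    """Yield (session id, seconds between USER_START and USER_END)."""
    starts = {}
    for line in lines:
        for record in AUDIT_LINE.finditer(line):
            stamp = datetime.strptime(
                f"{year} {record.group('ts')}", "%Y %b %d %H:%M:%S.%f"
            )
            ses = record.group("ses")
            if record.group("kind") == "USER_START":
                starts[ses] = stamp
            elif ses in starts:
                yield ses, (stamp - starts.pop(ses)).total_seconds()

if __name__ == "__main__":
    for ses, seconds in session_durations(sys.stdin):
        print(f"session {ses}: {seconds:.3f}s")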
Jan 23 18:44:46.016000 audit[5307]: USER_START pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.019000 audit[5311]: CRED_ACQ pid=5311 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.035943 kernel: audit: type=1105 audit(1769193886.016:829): pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.036140 kernel: audit: type=1103 audit(1769193886.019:830): pid=5311 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.117475 sshd[5311]: Connection closed by 10.0.0.1 port 51982 Jan 23 18:44:46.117861 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:46.119000 audit[5307]: USER_END pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.124725 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:51982.service: Deactivated successfully. Jan 23 18:44:46.127644 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:44:46.131153 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:44:46.120000 audit[5307]: CRED_DISP pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.133108 systemd-logind[1574]: Removed session 17. Jan 23 18:44:46.139346 kernel: audit: type=1106 audit(1769193886.119:831): pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.139449 kernel: audit: type=1104 audit(1769193886.120:832): pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:46.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:51982 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:44:47.125342 kubelet[2814]: E0123 18:44:47.125114 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:44:49.127558 kubelet[2814]: E0123 18:44:49.127116 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:44:49.129195 kubelet[2814]: E0123 18:44:49.127923 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:44:49.129195 kubelet[2814]: E0123 18:44:49.128026 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:44:49.129195 kubelet[2814]: E0123 18:44:49.128138 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:44:51.132095 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:51992.service - OpenSSH per-connection server daemon (10.0.0.1:51992). Jan 23 18:44:51.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:51.134087 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:51.134140 kernel: audit: type=1130 audit(1769193891.131:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:51.207000 audit[5327]: USER_ACCT pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.207780 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 51992 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:51.210482 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:51.216981 systemd-logind[1574]: New session 18 of user core. Jan 23 18:44:51.208000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.228496 kernel: audit: type=1101 audit(1769193891.207:835): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.228625 kernel: audit: type=1103 audit(1769193891.208:836): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.228649 kernel: audit: type=1006 audit(1769193891.208:837): pid=5327 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 18:44:51.208000 audit[5327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdc78be10 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:51.241751 kernel: audit: type=1300 audit(1769193891.208:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdc78be10 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:51.241822 kernel: audit: type=1327 audit(1769193891.208:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:51.208000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:51.247774 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:44:51.252000 audit[5327]: USER_START pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.252000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.272108 kernel: audit: type=1105 audit(1769193891.252:838): pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.272207 kernel: audit: type=1103 audit(1769193891.252:839): pid=5331 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.340430 sshd[5331]: Connection closed by 10.0.0.1 port 51992 Jan 23 18:44:51.340890 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:51.342000 audit[5327]: USER_END pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.346227 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:51992.service: Deactivated successfully. Jan 23 18:44:51.348797 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:44:51.350527 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:44:51.352252 systemd-logind[1574]: Removed session 18. 
Jan 23 18:44:51.342000 audit[5327]: CRED_DISP pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.360136 kernel: audit: type=1106 audit(1769193891.342:840): pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.360216 kernel: audit: type=1104 audit(1769193891.342:841): pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:51.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:51992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:55.125721 kubelet[2814]: E0123 18:44:55.125616 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1" Jan 23 18:44:56.360018 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:37234.service - OpenSSH per-connection server daemon (10.0.0.1:37234). Jan 23 18:44:56.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:37234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:56.363719 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 18:44:56.363760 kernel: audit: type=1130 audit(1769193896.358:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:37234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:56.456000 audit[5370]: USER_ACCT pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.458736 sshd[5370]: Accepted publickey for core from 10.0.0.1 port 37234 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:56.460851 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:56.458000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.468587 systemd-logind[1574]: New session 19 of user core. 
Jan 23 18:44:56.473237 kernel: audit: type=1101 audit(1769193896.456:844): pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.473486 kernel: audit: type=1103 audit(1769193896.458:845): pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.473660 kernel: audit: type=1006 audit(1769193896.458:846): pid=5370 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 23 18:44:56.458000 audit[5370]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0c5832b0 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:56.485500 kernel: audit: type=1300 audit(1769193896.458:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0c5832b0 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:56.485574 kernel: audit: type=1327 audit(1769193896.458:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:56.458000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:56.494681 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 18:44:56.496000 audit[5370]: USER_START pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.499000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.518980 kernel: audit: type=1105 audit(1769193896.496:847): pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.519062 kernel: audit: type=1103 audit(1769193896.499:848): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.588113 sshd[5374]: Connection closed by 10.0.0.1 port 37234 Jan 23 18:44:56.588543 sshd-session[5370]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:56.589000 audit[5370]: USER_END pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.589000 audit[5370]: CRED_DISP pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.607505 kernel: audit: type=1106 audit(1769193896.589:849): pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.607572 kernel: audit: type=1104 audit(1769193896.589:850): pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.612358 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:37234.service: Deactivated successfully. Jan 23 18:44:56.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:37234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:56.614604 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:44:56.615613 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:44:56.618661 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:37236.service - OpenSSH per-connection server daemon (10.0.0.1:37236). 
Jan 23 18:44:56.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.138:22-10.0.0.1:37236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:56.619591 systemd-logind[1574]: Removed session 19. Jan 23 18:44:56.689000 audit[5388]: USER_ACCT pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.691471 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 37236 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:56.691000 audit[5388]: CRED_ACQ pid=5388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.691000 audit[5388]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc56ec460 a2=3 a3=0 items=0 ppid=1 pid=5388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:56.691000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:56.693876 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:56.700188 systemd-logind[1574]: New session 20 of user core. Jan 23 18:44:56.715516 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 18:44:56.717000 audit[5388]: USER_START pid=5388 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.719000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.984761 sshd[5392]: Connection closed by 10.0.0.1 port 37236 Jan 23 18:44:56.985192 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:56.985000 audit[5388]: USER_END pid=5388 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.986000 audit[5388]: CRED_DISP pid=5388 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:56.998410 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:37236.service: Deactivated successfully. Jan 23 18:44:56.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.138:22-10.0.0.1:37236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 18:44:57.000745 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:44:57.002122 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:44:57.005347 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:37240.service - OpenSSH per-connection server daemon (10.0.0.1:37240). Jan 23 18:44:57.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.138:22-10.0.0.1:37240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:57.006187 systemd-logind[1574]: Removed session 20. Jan 23 18:44:57.078000 audit[5403]: USER_ACCT pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.080471 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 37240 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:57.080000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.080000 audit[5403]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9f9cda30 a2=3 a3=0 items=0 ppid=1 pid=5403 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.080000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:57.083031 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:57.088705 systemd-logind[1574]: New session 21 of user core. Jan 23 18:44:57.102472 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 18:44:57.104000 audit[5403]: USER_START pid=5403 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.106000 audit[5407]: CRED_ACQ pid=5407 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.540483 sshd[5407]: Connection closed by 10.0.0.1 port 37240 Jan 23 18:44:57.541759 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:57.541000 audit[5422]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:44:57.541000 audit[5422]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffff87e5630 a2=0 a3=7ffff87e561c items=0 ppid=2926 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:44:57.544000 audit[5403]: USER_END pid=5403 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.545000 audit[5403]: CRED_DISP pid=5403 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.548000 audit[5422]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:44:57.548000 audit[5422]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff87e5630 a2=0 a3=0 items=0 ppid=2926 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:44:57.553206 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:37240.service: Deactivated successfully. Jan 23 18:44:57.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.138:22-10.0.0.1:37240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:57.556378 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:44:57.560503 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit. 
Jan 23 18:44:57.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:37244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:57.564674 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:37244.service - OpenSSH per-connection server daemon (10.0.0.1:37244). Jan 23 18:44:57.567619 systemd-logind[1574]: Removed session 21. Jan 23 18:44:57.578000 audit[5429]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=5429 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:44:57.578000 audit[5429]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe1748fdb0 a2=0 a3=7ffe1748fd9c items=0 ppid=2926 pid=5429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:44:57.584000 audit[5429]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5429 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:44:57.584000 audit[5429]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe1748fdb0 a2=0 a3=0 items=0 ppid=2926 pid=5429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:44:57.633000 audit[5428]: USER_ACCT pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.635009 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 37244 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:57.635000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.635000 audit[5428]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8e8b31b0 a2=3 a3=0 items=0 ppid=1 pid=5428 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.635000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:57.637930 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:57.643990 systemd-logind[1574]: New session 22 of user core. Jan 23 18:44:57.657492 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 18:44:57.659000 audit[5428]: USER_START pid=5428 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.661000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.868558 sshd[5433]: Connection closed by 10.0.0.1 port 37244 Jan 23 18:44:57.869936 sshd-session[5428]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:57.870000 audit[5428]: USER_END pid=5428 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.871000 audit[5428]: CRED_DISP pid=5428 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.884427 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:37244.service: Deactivated successfully. Jan 23 18:44:57.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:37244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:57.886727 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 18:44:57.888009 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:44:57.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:37256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:57.891818 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:37256.service - OpenSSH per-connection server daemon (10.0.0.1:37256). Jan 23 18:44:57.893152 systemd-logind[1574]: Removed session 22. 
Jan 23 18:44:57.954000 audit[5445]: USER_ACCT pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.955687 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 37256 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:44:57.955000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.955000 audit[5445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce717f210 a2=3 a3=0 items=0 ppid=1 pid=5445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:44:57.955000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:44:57.958245 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:44:57.965241 systemd-logind[1574]: New session 23 of user core. Jan 23 18:44:57.968594 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 18:44:57.971000 audit[5445]: USER_START pid=5445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:57.973000 audit[5449]: CRED_ACQ pid=5449 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:58.105024 sshd[5449]: Connection closed by 10.0.0.1 port 37256 Jan 23 18:44:58.105435 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 23 18:44:58.105000 audit[5445]: USER_END pid=5445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:58.105000 audit[5445]: CRED_DISP pid=5445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:44:58.109463 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:37256.service: Deactivated successfully. Jan 23 18:44:58.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:37256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:44:58.112406 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:44:58.115863 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:44:58.117356 systemd-logind[1574]: Removed session 23. 
Jan 23 18:44:58.134484 kubelet[2814]: E0123 18:44:58.134177 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0" Jan 23 18:44:59.125462 kubelet[2814]: E0123 18:44:59.125375 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727" Jan 23 18:45:00.128462 kubelet[2814]: E0123 18:45:00.128328 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2" Jan 23 18:45:00.129497 kubelet[2814]: E0123 18:45:00.129170 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda" Jan 23 18:45:01.126133 kubelet[2814]: E0123 18:45:01.126011 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad" Jan 23 18:45:02.125384 kubelet[2814]: E0123 18:45:02.125321 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733" Jan 23 18:45:02.741000 audit[5463]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:45:02.745138 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 23 18:45:02.745210 kernel: audit: type=1325 audit(1769193902.741:892): table=filter:151 family=2 entries=26 op=nft_register_rule pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:45:02.741000 audit[5463]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdfcc2da60 a2=0 a3=7ffdfcc2da4c items=0 ppid=2926 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:02.760042 kernel: audit: type=1300 audit(1769193902.741:892): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdfcc2da60 a2=0 a3=7ffdfcc2da4c items=0 ppid=2926 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:02.760357 kernel: audit: type=1327 audit(1769193902.741:892): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:45:02.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:45:02.771000 audit[5463]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:45:02.771000 audit[5463]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdfcc2da60 a2=0 a3=7ffdfcc2da4c items=0 ppid=2926 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:02.787677 kernel: audit: type=1325 audit(1769193902.771:893): table=nat:152 family=2 entries=104 op=nft_register_chain pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 18:45:02.787784 kernel: audit: type=1300 audit(1769193902.771:893): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdfcc2da60 a2=0 a3=7ffdfcc2da4c items=0 ppid=2926 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:02.787812 kernel: audit: type=1327 audit(1769193902.771:893): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:45:02.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 18:45:03.123071 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:40278.service - OpenSSH per-connection server daemon (10.0.0.1:40278). Jan 23 18:45:03.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:40278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:45:03.132372 kernel: audit: type=1130 audit(1769193903.121:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:40278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:45:03.191000 audit[5465]: USER_ACCT pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.193396 sshd[5465]: Accepted publickey for core from 10.0.0.1 port 40278 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:45:03.195567 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:45:03.193000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.201401 systemd-logind[1574]: New session 24 of user core. Jan 23 18:45:03.209112 kernel: audit: type=1101 audit(1769193903.191:895): pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.209196 kernel: audit: type=1103 audit(1769193903.193:896): pid=5465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.209336 kernel: audit: type=1006 audit(1769193903.193:897): pid=5465 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 18:45:03.193000 audit[5465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe059df90 a2=3 a3=0 items=0 ppid=1 pid=5465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:03.193000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:45:03.219451 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 18:45:03.221000 audit[5465]: USER_START pid=5465 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.223000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.289628 sshd[5469]: Connection closed by 10.0.0.1 port 40278 Jan 23 18:45:03.289995 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Jan 23 18:45:03.290000 audit[5465]: USER_END pid=5465 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.290000 audit[5465]: CRED_DISP pid=5465 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:03.295555 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:40278.service: Deactivated successfully. Jan 23 18:45:03.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:40278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:45:03.297940 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 18:45:03.299383 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:45:03.300981 systemd-logind[1574]: Removed session 24. Jan 23 18:45:08.124431 kubelet[2814]: E0123 18:45:08.124372 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 18:45:08.303876 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:40286.service - OpenSSH per-connection server daemon (10.0.0.1:40286). Jan 23 18:45:08.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.138:22-10.0.0.1:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 18:45:08.305699 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 18:45:08.305751 kernel: audit: type=1130 audit(1769193908.302:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.138:22-10.0.0.1:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 18:45:08.367000 audit[5482]: USER_ACCT pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:08.368585 sshd[5482]: Accepted publickey for core from 10.0.0.1 port 40286 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I Jan 23 18:45:08.370703 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:45:08.368000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:08.376035 systemd-logind[1574]: New session 25 of user core. Jan 23 18:45:08.383631 kernel: audit: type=1101 audit(1769193908.367:904): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:08.383671 kernel: audit: type=1103 audit(1769193908.368:905): pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 23 18:45:08.383705 kernel: audit: type=1006 audit(1769193908.368:906): pid=5482 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 18:45:08.368000 audit[5482]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf8d11520 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:08.397420 kernel: audit: type=1300 audit(1769193908.368:906): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf8d11520 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 18:45:08.397453 kernel: audit: type=1327 audit(1769193908.368:906): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:45:08.368000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 18:45:08.405512 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 18:45:08.407000 audit[5482]: USER_START pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.409000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.426760 kernel: audit: type=1105 audit(1769193908.407:907): pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.426892 kernel: audit: type=1103 audit(1769193908.409:908): pid=5486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.495366 sshd[5486]: Connection closed by 10.0.0.1 port 40286
Jan 23 18:45:08.495776 sshd-session[5482]: pam_unix(sshd:session): session closed for user core
Jan 23 18:45:08.496000 audit[5482]: USER_END pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.501107 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:40286.service: Deactivated successfully.
Jan 23 18:45:08.505063 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 18:45:08.515138 kernel: audit: type=1106 audit(1769193908.496:909): pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.515180 kernel: audit: type=1104 audit(1769193908.496:910): pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.496000 audit[5482]: CRED_DISP pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:08.508191 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit.
Jan 23 18:45:08.511012 systemd-logind[1574]: Removed session 25.
Jan 23 18:45:08.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.138:22-10.0.0.1:40286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:09.124697 kubelet[2814]: E0123 18:45:09.124604 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1"
Jan 23 18:45:12.125703 kubelet[2814]: E0123 18:45:12.125601 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2smd" podUID="f72bd6e0-6290-4ad0-99d3-a580eaff8fda"
Jan 23 18:45:13.124575 kubelet[2814]: E0123 18:45:13.124527 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-t9bs5" podUID="50725488-4a1d-4f65-a7da-a4a923730733"
Jan 23 18:45:13.124781 kubelet[2814]: E0123 18:45:13.124608 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bd45f567-rc4xx" podUID="4067c734-cff1-4419-879a-3fc371d855f2"
Jan 23 18:45:13.125078 kubelet[2814]: E0123 18:45:13.125027 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5bdcd99c5b-6vx2x" podUID="c6f4bf65-2b8c-4712-a434-da7d69d938c0"
Jan 23 18:45:13.512384 systemd[1]: Started sshd@24-10.0.0.138:22-10.0.0.1:40332.service - OpenSSH per-connection server daemon (10.0.0.1:40332).
Jan 23 18:45:13.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.138:22-10.0.0.1:40332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:13.514212 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 18:45:13.514298 kernel: audit: type=1130 audit(1769193913.511:912): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.138:22-10.0.0.1:40332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:13.581000 audit[5506]: USER_ACCT pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.582971 sshd[5506]: Accepted publickey for core from 10.0.0.1 port 40332 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I
Jan 23 18:45:13.585113 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:45:13.582000 audit[5506]: CRED_ACQ pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.590784 systemd-logind[1574]: New session 26 of user core.
Jan 23 18:45:13.597331 kernel: audit: type=1101 audit(1769193913.581:913): pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.597409 kernel: audit: type=1103 audit(1769193913.582:914): pid=5506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.597454 kernel: audit: type=1006 audit(1769193913.582:915): pid=5506 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 23 18:45:13.582000 audit[5506]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3d54dbb0 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:45:13.609810 kernel: audit: type=1300 audit(1769193913.582:915): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3d54dbb0 a2=3 a3=0 items=0 ppid=1 pid=5506 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:45:13.582000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:45:13.612786 kernel: audit: type=1327 audit(1769193913.582:915): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:45:13.621563 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 18:45:13.623000 audit[5506]: USER_START pid=5506 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.623000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.646215 kernel: audit: type=1105 audit(1769193913.623:916): pid=5506 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.646382 kernel: audit: type=1103 audit(1769193913.623:917): pid=5510 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.699792 sshd[5510]: Connection closed by 10.0.0.1 port 40332
Jan 23 18:45:13.700115 sshd-session[5506]: pam_unix(sshd:session): session closed for user core
Jan 23 18:45:13.700000 audit[5506]: USER_END pid=5506 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.704606 systemd[1]: sshd@24-10.0.0.138:22-10.0.0.1:40332.service: Deactivated successfully.
Jan 23 18:45:13.706839 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 18:45:13.709042 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit.
Jan 23 18:45:13.710417 systemd-logind[1574]: Removed session 26.
Jan 23 18:45:13.700000 audit[5506]: CRED_DISP pid=5506 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.720961 kernel: audit: type=1106 audit(1769193913.700:918): pid=5506 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.721015 kernel: audit: type=1104 audit(1769193913.700:919): pid=5506 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:13.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.138:22-10.0.0.1:40332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:14.125616 kubelet[2814]: E0123 18:45:14.125358 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-56878495cb-jls4r" podUID="2647b35f-a248-488d-8f41-2052dd32f727"
Jan 23 18:45:14.126163 kubelet[2814]: E0123 18:45:14.125933 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d457c8689-kch4w" podUID="934ceaea-a5ec-4119-99e0-f63128ff37ad"
Jan 23 18:45:17.124406 kubelet[2814]: E0123 18:45:17.124356 2814 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 18:45:18.716405 systemd[1]: Started sshd@25-10.0.0.138:22-10.0.0.1:40346.service - OpenSSH per-connection server daemon (10.0.0.1:40346).
Jan 23 18:45:18.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.138:22-10.0.0.1:40346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:18.718408 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 18:45:18.718495 kernel: audit: type=1130 audit(1769193918.715:921): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.138:22-10.0.0.1:40346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:18.791000 audit[5527]: USER_ACCT pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.793229 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 40346 ssh2: RSA SHA256:tr1+OYaDVTFUuz/TM8iuIlZSJ28FUKowPQO1jHH9Q7I
Jan 23 18:45:18.795861 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:45:18.793000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.807749 kernel: audit: type=1101 audit(1769193918.791:922): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.807822 kernel: audit: type=1103 audit(1769193918.793:923): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.809687 systemd-logind[1574]: New session 27 of user core.
Jan 23 18:45:18.812920 kernel: audit: type=1006 audit(1769193918.793:924): pid=5527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 23 18:45:18.812975 kernel: audit: type=1300 audit(1769193918.793:924): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42c70170 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:45:18.793000 audit[5527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc42c70170 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 18:45:18.793000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:45:18.823329 kernel: audit: type=1327 audit(1769193918.793:924): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 18:45:18.826465 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 18:45:18.828000 audit[5527]: USER_START pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.840318 kernel: audit: type=1105 audit(1769193918.828:925): pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.840376 kernel: audit: type=1103 audit(1769193918.831:926): pid=5531 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.831000 audit[5531]: CRED_ACQ pid=5531 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.965333 sshd[5531]: Connection closed by 10.0.0.1 port 40346
Jan 23 18:45:18.965690 sshd-session[5527]: pam_unix(sshd:session): session closed for user core
Jan 23 18:45:18.966000 audit[5527]: USER_END pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.977486 kernel: audit: type=1106 audit(1769193918.966:927): pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.977587 kernel: audit: type=1104 audit(1769193918.966:928): pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.966000 audit[5527]: CRED_DISP pid=5527 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 23 18:45:18.981711 systemd[1]: sshd@25-10.0.0.138:22-10.0.0.1:40346.service: Deactivated successfully.
Jan 23 18:45:18.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.138:22-10.0.0.1:40346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 18:45:18.985028 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 18:45:18.986210 systemd-logind[1574]: Session 27 logged out. Waiting for processes to exit.
Jan 23 18:45:18.987857 systemd-logind[1574]: Removed session 27.
Jan 23 18:45:20.126129 containerd[1598]: time="2026-01-23T18:45:20.125447753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 18:45:20.210939 containerd[1598]: time="2026-01-23T18:45:20.210840163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:45:20.212234 containerd[1598]: time="2026-01-23T18:45:20.212168985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 18:45:20.212338 containerd[1598]: time="2026-01-23T18:45:20.212245267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 23 18:45:20.212545 kubelet[2814]: E0123 18:45:20.212491 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:45:20.213173 kubelet[2814]: E0123 18:45:20.212551 2814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:45:20.213173 kubelet[2814]: E0123 18:45:20.212722 2814 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp2lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-276fc_calico-system(72e54e47-91e4-415c-876e-aa36180ac3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:45:20.214745 kubelet[2814]: E0123 18:45:20.214698 2814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-276fc" podUID="72e54e47-91e4-415c-876e-aa36180ac3b1"