Jan 15 05:43:33.338438 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 15 03:08:43 -00 2026 Jan 15 05:43:33.338459 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:43:33.338470 kernel: BIOS-provided physical RAM map: Jan 15 05:43:33.338476 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 15 05:43:33.338482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 15 05:43:33.338488 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 15 05:43:33.338494 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 15 05:43:33.338500 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 15 05:43:33.338506 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 15 05:43:33.338512 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 15 05:43:33.338520 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 05:43:33.338526 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 15 05:43:33.338532 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 05:43:33.338538 kernel: NX (Execute Disable) protection: active Jan 15 05:43:33.338546 kernel: APIC: Static calls initialized Jan 15 05:43:33.338554 kernel: SMBIOS 2.8 present. 
Jan 15 05:43:33.338560 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 15 05:43:33.338567 kernel: DMI: Memory slots populated: 1/1 Jan 15 05:43:33.338573 kernel: Hypervisor detected: KVM Jan 15 05:43:33.338579 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 15 05:43:33.338586 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 05:43:33.338592 kernel: kvm-clock: using sched offset of 4266686351 cycles Jan 15 05:43:33.338598 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 05:43:33.338605 kernel: tsc: Detected 2445.426 MHz processor Jan 15 05:43:33.338612 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 05:43:33.338621 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 05:43:33.338628 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 15 05:43:33.338635 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 15 05:43:33.338641 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 05:43:33.338648 kernel: Using GB pages for direct mapping Jan 15 05:43:33.338655 kernel: ACPI: Early table checksum verification disabled Jan 15 05:43:33.338661 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 15 05:43:33.338670 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338677 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338683 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338690 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 15 05:43:33.338697 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338703 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338710 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338719 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:43:33.338728 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 15 05:43:33.338735 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 15 05:43:33.338742 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 15 05:43:33.338749 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 15 05:43:33.338758 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 15 05:43:33.338765 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 15 05:43:33.338772 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 15 05:43:33.338779 kernel: No NUMA configuration found Jan 15 05:43:33.338785 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 15 05:43:33.338792 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Jan 15 05:43:33.338801 kernel: Zone ranges: Jan 15 05:43:33.338808 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 05:43:33.338815 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 15 05:43:33.338821 kernel: Normal empty Jan 15 05:43:33.338828 kernel: Device empty Jan 15 05:43:33.338835 kernel: Movable zone start for each node Jan 15 05:43:33.338841 kernel: Early memory node ranges Jan 15 05:43:33.338848 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Jan 15 05:43:33.338857 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 15 05:43:33.338864 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 15 05:43:33.338871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 05:43:33.338877 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 15 05:43:33.338884 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 15 05:43:33.338891 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 05:43:33.338898 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 05:43:33.338905 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 05:43:33.338914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 05:43:33.338921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 05:43:33.338927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 05:43:33.338934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 05:43:33.338941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 05:43:33.338948 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 05:43:33.338955 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 05:43:33.338963 kernel: TSC deadline timer available Jan 15 05:43:33.338970 kernel: CPU topo: Max. logical packages: 1 Jan 15 05:43:33.338977 kernel: CPU topo: Max. logical dies: 1 Jan 15 05:43:33.339016 kernel: CPU topo: Max. dies per package: 1 Jan 15 05:43:33.339023 kernel: CPU topo: Max. threads per core: 1 Jan 15 05:43:33.339030 kernel: CPU topo: Num. cores per package: 4 Jan 15 05:43:33.339037 kernel: CPU topo: Num. threads per package: 4 Jan 15 05:43:33.339044 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 15 05:43:33.339053 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 05:43:33.339060 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 15 05:43:33.339067 kernel: kvm-guest: setup PV sched yield Jan 15 05:43:33.339074 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 15 05:43:33.339081 kernel: Booting paravirtualized kernel on KVM Jan 15 05:43:33.339088 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 05:43:33.339095 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 15 05:43:33.339104 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 15 05:43:33.339111 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 15 05:43:33.339117 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 15 05:43:33.339124 kernel: kvm-guest: PV spinlocks enabled Jan 15 05:43:33.339177 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 05:43:33.339185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:43:33.339193 kernel: random: crng init done Jan 15 05:43:33.339203 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 05:43:33.339210 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 
05:43:33.339216 kernel: Fallback order for Node 0: 0 Jan 15 05:43:33.339224 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Jan 15 05:43:33.339231 kernel: Policy zone: DMA32 Jan 15 05:43:33.339238 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 05:43:33.339245 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 15 05:43:33.339254 kernel: ftrace: allocating 40128 entries in 157 pages Jan 15 05:43:33.339260 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 05:43:33.339267 kernel: Dynamic Preempt: voluntary Jan 15 05:43:33.339274 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 05:43:33.339282 kernel: rcu: RCU event tracing is enabled. Jan 15 05:43:33.339289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 15 05:43:33.339296 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 05:43:33.339303 kernel: Rude variant of Tasks RCU enabled. Jan 15 05:43:33.339312 kernel: Tracing variant of Tasks RCU enabled. Jan 15 05:43:33.339319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 05:43:33.339326 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 15 05:43:33.339333 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:43:33.339340 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:43:33.339347 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:43:33.339354 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 15 05:43:33.339363 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 15 05:43:33.339376 kernel: Console: colour VGA+ 80x25 Jan 15 05:43:33.339385 kernel: printk: legacy console [ttyS0] enabled Jan 15 05:43:33.339392 kernel: ACPI: Core revision 20240827 Jan 15 05:43:33.339400 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 05:43:33.339407 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 05:43:33.339414 kernel: x2apic enabled Jan 15 05:43:33.339421 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 05:43:33.339430 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 15 05:43:33.339446 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 15 05:43:33.339459 kernel: kvm-guest: setup PV IPIs Jan 15 05:43:33.339472 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 05:43:33.339483 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 05:43:33.339497 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Jan 15 05:43:33.339508 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 05:43:33.339522 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 15 05:43:33.339533 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 15 05:43:33.339547 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 05:43:33.339558 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 05:43:33.339569 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 05:43:33.339582 kernel: Speculative Store Bypass: Vulnerable Jan 15 05:43:33.339596 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 15 05:43:33.339608 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 15 05:43:33.339619 kernel: active return thunk: srso_alias_return_thunk Jan 15 05:43:33.339632 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 15 05:43:33.339643 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 15 05:43:33.339657 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 05:43:33.339672 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 05:43:33.339686 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 05:43:33.339697 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 05:43:33.339710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 05:43:33.339722 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 15 05:43:33.339735 kernel: Freeing SMP alternatives memory: 32K Jan 15 05:43:33.339747 kernel: pid_max: default: 32768 minimum: 301 Jan 15 05:43:33.339757 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 05:43:33.339765 kernel: landlock: Up and running. Jan 15 05:43:33.339772 kernel: SELinux: Initializing. Jan 15 05:43:33.339779 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 05:43:33.339787 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 05:43:33.339794 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 15 05:43:33.339801 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 15 05:43:33.339811 kernel: signal: max sigframe size: 1776 Jan 15 05:43:33.339818 kernel: rcu: Hierarchical SRCU implementation. Jan 15 05:43:33.339825 kernel: rcu: Max phase no-delay instances is 400. Jan 15 05:43:33.339832 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 05:43:33.339840 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 05:43:33.339847 kernel: smp: Bringing up secondary CPUs ... Jan 15 05:43:33.339854 kernel: smpboot: x86: Booting SMP configuration: Jan 15 05:43:33.339863 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 15 05:43:33.339870 kernel: smp: Brought up 1 node, 4 CPUs Jan 15 05:43:33.339878 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 15 05:43:33.339885 kernel: Memory: 2445292K/2571752K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120520K reserved, 0K cma-reserved) Jan 15 05:43:33.339896 kernel: devtmpfs: initialized Jan 15 05:43:33.339903 kernel: x86/mm: Memory block size: 128MB Jan 15 05:43:33.339911 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 05:43:33.339920 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 15 05:43:33.339927 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 05:43:33.339934 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 05:43:33.339942 kernel: audit: initializing netlink subsys (disabled) Jan 15 05:43:33.339949 kernel: audit: type=2000 audit(1768455810.031:1): state=initialized audit_enabled=0 res=1 Jan 15 05:43:33.339956 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 05:43:33.339963 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 05:43:33.339972 kernel: cpuidle: using governor menu Jan 15 05:43:33.339979 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 05:43:33.340028 kernel: dca service started, version 1.12.1 Jan 15 05:43:33.340036 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 15 05:43:33.340043 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 15 05:43:33.340050 kernel: PCI: Using configuration type 1 for base access Jan 15 05:43:33.340057 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 05:43:33.340067 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 05:43:33.340074 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 05:43:33.340081 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 05:43:33.340089 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 05:43:33.340096 kernel: ACPI: Added _OSI(Module Device) Jan 15 05:43:33.340103 kernel: ACPI: Added _OSI(Processor Device) Jan 15 05:43:33.340110 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 05:43:33.340117 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 05:43:33.340172 kernel: ACPI: Interpreter enabled Jan 15 05:43:33.340180 kernel: ACPI: PM: (supports S0 S3 S5) Jan 15 05:43:33.340187 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 05:43:33.340194 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 05:43:33.340202 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 05:43:33.340209 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 05:43:33.340216 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 05:43:33.340453 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 05:43:33.340632 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 15 05:43:33.340807 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 15 05:43:33.340817 kernel: PCI host bridge to bus 0000:00 Jan 15 05:43:33.341026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 05:43:33.341258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 05:43:33.341418 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 05:43:33.341573 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 15 05:43:33.341728 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 15 05:43:33.341882 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 15 05:43:33.342076 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 05:43:33.342341 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 15 05:43:33.342522 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 15 05:43:33.342695 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Jan 15 05:43:33.342860 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Jan 15 05:43:33.343063 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Jan 15 05:43:33.343309 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 05:43:33.343486 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 05:43:33.343653 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Jan 15 05:43:33.343818 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Jan 15 05:43:33.344017 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Jan 15 05:43:33.344329 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 05:43:33.344509 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Jan 15 05:43:33.344678 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Jan 15 05:43:33.344845 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Jan 15 05:43:33.345068 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 05:43:33.345298 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Jan 15 05:43:33.345473 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Jan 15 05:43:33.345639 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 15 05:43:33.345804 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Jan 15 05:43:33.345980 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 15 05:43:33.346276 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 05:43:33.346456 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 15 05:43:33.346631 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Jan 15 05:43:33.346802 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Jan 15 05:43:33.346979 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 15 05:43:33.347323 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 15 05:43:33.347336 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 05:43:33.347344 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 05:43:33.347356 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 05:43:33.347363 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 05:43:33.347370 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 05:43:33.347378 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 05:43:33.347385 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 05:43:33.347392 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 05:43:33.347399 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 05:43:33.347409 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 05:43:33.347416 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 05:43:33.347423 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 05:43:33.347430 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 05:43:33.347438 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 05:43:33.347445 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 05:43:33.347452 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 05:43:33.347462 kernel: iommu: Default domain type: Translated Jan 15 05:43:33.347469 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 05:43:33.347476 kernel: PCI: Using ACPI for IRQ routing Jan 15 05:43:33.347484 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 05:43:33.347491 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 15 05:43:33.347498 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 15 05:43:33.347668 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 15 05:43:33.347839 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 05:43:33.348049 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 05:43:33.348061 kernel: vgaarb: loaded Jan 15 05:43:33.348068 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 05:43:33.348076 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 05:43:33.348083 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 05:43:33.348090 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 
05:43:33.348101 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 05:43:33.348108 kernel: pnp: PnP ACPI init Jan 15 05:43:33.348403 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 15 05:43:33.348416 kernel: pnp: PnP ACPI: found 6 devices Jan 15 05:43:33.348424 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 05:43:33.348432 kernel: NET: Registered PF_INET protocol family Jan 15 05:43:33.348443 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 05:43:33.348450 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 05:43:33.348457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 05:43:33.348465 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 05:43:33.348472 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 05:43:33.348479 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 05:43:33.348486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 05:43:33.348496 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 05:43:33.348503 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 05:43:33.348510 kernel: NET: Registered PF_XDP protocol family Jan 15 05:43:33.348670 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 05:43:33.348826 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 05:43:33.348980 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 05:43:33.349237 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 15 05:43:33.349399 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 15 05:43:33.349553 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 15 05:43:33.349562 kernel: PCI: CLS 0 bytes, default 64 Jan 15 05:43:33.349570 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 05:43:33.349578 kernel: Initialise system trusted keyrings Jan 15 05:43:33.349585 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 05:43:33.349592 kernel: Key type asymmetric registered Jan 15 05:43:33.349602 kernel: Asymmetric key parser 'x509' registered Jan 15 05:43:33.349609 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 05:43:33.349617 kernel: io scheduler mq-deadline registered Jan 15 05:43:33.349624 kernel: io scheduler kyber registered Jan 15 05:43:33.349631 kernel: io scheduler bfq registered Jan 15 05:43:33.349639 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 05:43:33.349647 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 05:43:33.349656 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 05:43:33.349663 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 05:43:33.349671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 05:43:33.349678 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 05:43:33.349685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 05:43:33.349692 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 05:43:33.349700 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 05:43:33.349871 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 15 
05:43:33.349882 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 05:43:33.350091 kernel: rtc_cmos 00:04: registered as rtc0 Jan 15 05:43:33.350441 kernel: rtc_cmos 00:04: setting system clock to 2026-01-15T05:43:31 UTC (1768455811) Jan 15 05:43:33.350607 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 15 05:43:33.350617 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 15 05:43:33.350628 kernel: NET: Registered PF_INET6 protocol family Jan 15 05:43:33.350635 kernel: Segment Routing with IPv6 Jan 15 05:43:33.350642 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 05:43:33.350650 kernel: NET: Registered PF_PACKET protocol family Jan 15 05:43:33.350771 kernel: Key type dns_resolver registered Jan 15 05:43:33.350780 kernel: IPI shorthand broadcast: enabled Jan 15 05:43:33.350828 kernel: sched_clock: Marking stable (2187017406, 386467717)->(2743019152, -169534029) Jan 15 05:43:33.350836 kernel: registered taskstats version 1 Jan 15 05:43:33.350846 kernel: Loading compiled-in X.509 certificates Jan 15 05:43:33.350853 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a89cae614c389520e311ccbffccefdc95226b716' Jan 15 05:43:33.350861 kernel: Demotion targets for Node 0: null Jan 15 05:43:33.350868 kernel: Key type .fscrypt registered Jan 15 05:43:33.350875 kernel: Key type fscrypt-provisioning registered Jan 15 05:43:33.350882 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 15 05:43:33.350892 kernel: ima: Allocated hash algorithm: sha1 Jan 15 05:43:33.350899 kernel: ima: No architecture policies found Jan 15 05:43:33.350906 kernel: clk: Disabling unused clocks Jan 15 05:43:33.350914 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 15 05:43:33.350921 kernel: Write protecting the kernel read-only data: 47104k Jan 15 05:43:33.350928 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 15 05:43:33.350935 kernel: Run /init as init process Jan 15 05:43:33.350943 kernel: with arguments: Jan 15 05:43:33.350952 kernel: /init Jan 15 05:43:33.350959 kernel: with environment: Jan 15 05:43:33.350966 kernel: HOME=/ Jan 15 05:43:33.350973 kernel: TERM=linux Jan 15 05:43:33.350980 kernel: SCSI subsystem initialized Jan 15 05:43:33.351029 kernel: libata version 3.00 loaded. 
Jan 15 05:43:33.351261 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 05:43:33.351278 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 05:43:33.351446 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 15 05:43:33.351611 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 15 05:43:33.351777 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 05:43:33.351965 kernel: scsi host0: ahci Jan 15 05:43:33.352259 kernel: scsi host1: ahci Jan 15 05:43:33.352447 kernel: scsi host2: ahci Jan 15 05:43:33.352625 kernel: scsi host3: ahci Jan 15 05:43:33.352827 kernel: scsi host4: ahci Jan 15 05:43:33.353099 kernel: scsi host5: ahci Jan 15 05:43:33.353118 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1 Jan 15 05:43:33.353210 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1 Jan 15 05:43:33.353220 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1 Jan 15 05:43:33.353227 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1 Jan 15 05:43:33.353235 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1 Jan 15 05:43:33.353243 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1 Jan 15 05:43:33.353251 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 05:43:33.353259 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 05:43:33.353273 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 05:43:33.353281 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 15 05:43:33.353289 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 05:43:33.353297 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 15 05:43:33.353305 kernel: ata3.00: applying bridge limits Jan 15 05:43:33.353312 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 05:43:33.353320 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 05:43:33.353329 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 05:43:33.353337 kernel: ata3.00: configured for UDMA/100 Jan 15 05:43:33.353543 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 15 05:43:33.353725 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 15 05:43:33.353893 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 15 05:43:33.353903 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 05:43:33.353915 kernel: GPT:16515071 != 27000831 Jan 15 05:43:33.353922 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 15 05:43:33.353930 kernel: GPT:16515071 != 27000831 Jan 15 05:43:33.353937 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 05:43:33.353945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 05:43:33.354250 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 15 05:43:33.354263 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 05:43:33.354453 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 15 05:43:33.354464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 15 05:43:33.354472 kernel: device-mapper: uevent: version 1.0.3 Jan 15 05:43:33.354480 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 05:43:33.354488 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 05:43:33.354496 kernel: raid6: avx2x4 gen() 39125 MB/s Jan 15 05:43:33.354503 kernel: raid6: avx2x2 gen() 39106 MB/s Jan 15 05:43:33.354514 kernel: raid6: avx2x1 gen() 29733 MB/s Jan 15 05:43:33.354522 kernel: raid6: using algorithm avx2x4 gen() 39125 MB/s Jan 15 05:43:33.354529 kernel: raid6: .... xor() 5225 MB/s, rmw enabled Jan 15 05:43:33.354539 kernel: raid6: using avx2x2 recovery algorithm Jan 15 05:43:33.354547 kernel: xor: automatically using best checksumming function avx Jan 15 05:43:33.354554 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 05:43:33.354562 kernel: BTRFS: device fsid 0b6e2cdd-9800-410c-b18c-88de6acfe8db devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (182) Jan 15 05:43:33.354572 kernel: BTRFS info (device dm-0): first mount of filesystem 0b6e2cdd-9800-410c-b18c-88de6acfe8db Jan 15 05:43:33.354580 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:43:33.354588 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 05:43:33.354596 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 05:43:33.354605 kernel: loop: module loaded Jan 15 05:43:33.354613 kernel: loop0: detected capacity change from 0 to 100536 Jan 15 05:43:33.354621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 05:43:33.354630 systemd[1]: Successfully made /usr/ read-only. Jan 15 05:43:33.354640 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 05:43:33.354649 systemd[1]: Detected virtualization kvm. Jan 15 05:43:33.354659 systemd[1]: Detected architecture x86-64. Jan 15 05:43:33.354667 systemd[1]: Running in initrd. Jan 15 05:43:33.354675 systemd[1]: No hostname configured, using default hostname. Jan 15 05:43:33.354684 systemd[1]: Hostname set to . Jan 15 05:43:33.354692 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 05:43:33.354700 systemd[1]: Queued start job for default target initrd.target. Jan 15 05:43:33.354708 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 05:43:33.354719 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:43:33.354727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:43:33.354736 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 05:43:33.354744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 05:43:33.354752 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 05:43:33.354763 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 05:43:33.354771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 15 05:43:33.354780 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:43:33.354790 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 05:43:33.354798 systemd[1]: Reached target paths.target - Path Units. Jan 15 05:43:33.354806 systemd[1]: Reached target slices.target - Slice Units. Jan 15 05:43:33.354814 systemd[1]: Reached target swap.target - Swaps. Jan 15 05:43:33.354825 systemd[1]: Reached target timers.target - Timer Units. Jan 15 05:43:33.354833 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 05:43:33.354841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 05:43:33.354849 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:43:33.354857 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 05:43:33.354865 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 05:43:33.354874 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:43:33.354884 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 05:43:33.354892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:43:33.354900 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 05:43:33.354908 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 05:43:33.354917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 05:43:33.354925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 05:43:33.354935 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 05:43:33.354943 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 05:43:33.354951 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 05:43:33.354960 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 05:43:33.354968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 05:43:33.354978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:43:33.355025 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 05:43:33.355035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:43:33.355043 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 05:43:33.355076 systemd-journald[320]: Collecting audit messages is enabled. Jan 15 05:43:33.355100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 15 05:43:33.355109 systemd-journald[320]: Journal started Jan 15 05:43:33.355216 systemd-journald[320]: Runtime Journal (/run/log/journal/8658875c7f8b46b0b7acac0f57e1a2dd) is 6M, max 48.2M, 42.1M free. Jan 15 05:43:33.367362 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 05:43:33.367388 kernel: audit: type=1130 audit(1768455813.358:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:33.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.368320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 05:43:33.523821 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 05:43:33.523850 kernel: Bridge firewalling registered Jan 15 05:43:33.377542 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 15 05:43:33.549419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 05:43:33.563109 kernel: audit: type=1130 audit(1768455813.549:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.563303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:43:33.579443 kernel: audit: type=1130 audit(1768455813.563:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.579815 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 05:43:33.595810 kernel: audit: type=1130 audit(1768455813.579:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.585360 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 05:43:33.603674 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 05:43:33.603785 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 05:43:33.616181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 05:43:33.623503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:43:33.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.634295 kernel: audit: type=1130 audit(1768455813.623:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:33.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.649595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:43:33.660030 kernel: audit: type=1130 audit(1768455813.649:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.673352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:43:33.693257 kernel: audit: type=1130 audit(1768455813.673:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.693282 kernel: audit: type=1334 audit(1768455813.674:9): prog-id=6 op=LOAD Jan 15 05:43:33.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.674000 audit: BPF prog-id=6 op=LOAD Jan 15 05:43:33.675803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 05:43:33.697487 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 05:43:33.713467 kernel: audit: type=1130 audit(1768455813.701:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.705481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 05:43:33.747754 dracut-cmdline[360]: dracut-109 Jan 15 05:43:33.756752 dracut-cmdline[360]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:43:33.772713 systemd-resolved[356]: Positive Trust Anchors: Jan 15 05:43:33.772724 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 05:43:33.772732 systemd-resolved[356]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 05:43:33.772780 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 05:43:33.799406 systemd-resolved[356]: Defaulting to hostname 'linux'. Jan 15 05:43:33.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:33.800845 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 05:43:33.812316 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:43:33.912220 kernel: Loading iSCSI transport class v2.0-870. Jan 15 05:43:33.927211 kernel: iscsi: registered transport (tcp) Jan 15 05:43:33.953788 kernel: iscsi: registered transport (qla4xxx) Jan 15 05:43:33.953868 kernel: QLogic iSCSI HBA Driver Jan 15 05:43:33.988654 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 05:43:34.015767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:43:34.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.020666 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 05:43:34.097632 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 05:43:34.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.103440 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 05:43:34.109484 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 05:43:34.151803 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 05:43:34.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.156000 audit: BPF prog-id=7 op=LOAD Jan 15 05:43:34.156000 audit: BPF prog-id=8 op=LOAD Jan 15 05:43:34.157744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:43:34.194688 systemd-udevd[583]: Using default interface naming scheme 'v257'. Jan 15 05:43:34.211798 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 05:43:34.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:34.219521 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 05:43:34.258040 dracut-pre-trigger[632]: rd.md=0: removing MD RAID activation Jan 15 05:43:34.303765 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 05:43:34.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.304236 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 05:43:34.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.317000 audit: BPF prog-id=9 op=LOAD Jan 15 05:43:34.318110 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 05:43:34.319575 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 05:43:34.381173 systemd-networkd[726]: lo: Link UP Jan 15 05:43:34.381197 systemd-networkd[726]: lo: Gained carrier Jan 15 05:43:34.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.381936 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 05:43:34.384797 systemd[1]: Reached target network.target - Network. Jan 15 05:43:34.446394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:43:34.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.457083 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 05:43:34.510496 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 05:43:34.542433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 05:43:34.561749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 05:43:34.566360 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 05:43:34.583446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 05:43:34.590672 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 15 05:43:34.599253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 05:43:34.626053 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 15 05:43:34.626102 kernel: audit: type=1131 audit(1768455814.608:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.604733 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 15 05:43:34.632268 kernel: AES CTR mode by8 optimization enabled Jan 15 05:43:34.604869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:43:34.608723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:43:34.625472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:43:34.661818 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:43:34.661857 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 05:43:34.662859 systemd-networkd[726]: eth0: Link UP Jan 15 05:43:34.664301 systemd-networkd[726]: eth0: Gained carrier Jan 15 05:43:34.664312 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:43:34.680311 disk-uuid[834]: Primary Header is updated. Jan 15 05:43:34.680311 disk-uuid[834]: Secondary Entries is updated. Jan 15 05:43:34.680311 disk-uuid[834]: Secondary Header is updated. Jan 15 05:43:34.701222 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 05:43:34.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.772963 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 05:43:34.883182 kernel: audit: type=1130 audit(1768455814.856:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.857766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 05:43:34.880023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:43:34.883258 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 05:43:34.901815 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 05:43:34.909337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:43:34.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.924241 kernel: audit: type=1130 audit(1768455814.909:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.955394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 05:43:34.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:34.969221 kernel: audit: type=1130 audit(1768455814.961:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.720691 disk-uuid[835]: Warning: The kernel is still using the old partition table. 
Jan 15 05:43:35.720691 disk-uuid[835]: The new table will be used at the next reboot or after you Jan 15 05:43:35.720691 disk-uuid[835]: run partprobe(8) or kpartx(8) Jan 15 05:43:35.720691 disk-uuid[835]: The operation has completed successfully. Jan 15 05:43:35.737047 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 05:43:35.737283 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 05:43:35.760892 kernel: audit: type=1130 audit(1768455815.743:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.760918 kernel: audit: type=1131 audit(1768455815.743:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.745267 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 05:43:35.750242 systemd-networkd[726]: eth0: Gained IPv6LL Jan 15 05:43:35.807479 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Jan 15 05:43:35.807516 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:43:35.807528 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:43:35.817516 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:43:35.817586 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:43:35.829231 kernel: BTRFS info (device vda6): last unmount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:43:35.831039 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 05:43:35.843336 kernel: audit: type=1130 audit(1768455815.831:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:35.832569 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 15 05:43:35.951303 ignition[879]: Ignition 2.24.0 Jan 15 05:43:35.952292 ignition[879]: Stage: fetch-offline Jan 15 05:43:35.952348 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:43:35.952361 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:43:35.952455 ignition[879]: parsed url from cmdline: "" Jan 15 05:43:35.952460 ignition[879]: no config URL provided Jan 15 05:43:35.952465 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 05:43:35.952476 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 15 05:43:35.952518 ignition[879]: op(1): [started] loading QEMU firmware config module Jan 15 05:43:35.952525 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 15 05:43:35.964121 ignition[879]: op(1): [finished] loading QEMU firmware config module Jan 15 05:43:36.193679 ignition[879]: parsing config with SHA512: 353236b6d436e2628b6daa86b08d905c6c54ee4c6997434ac1a9c9ffc072e43bbf3e19563720a93f387a309cf3a036dc552d625ec8651568d30102b219f6cc77 Jan 15 05:43:36.199527 unknown[879]: fetched base config from "system" Jan 15 05:43:36.199573 unknown[879]: fetched user config from "qemu" Jan 15 05:43:36.199976 ignition[879]: fetch-offline: fetch-offline passed Jan 15 05:43:36.204396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 05:43:36.221454 kernel: audit: type=1130 audit(1768455816.205:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.200086 ignition[879]: Ignition finished successfully Jan 15 05:43:36.205691 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 15 05:43:36.206878 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 15 05:43:36.267072 ignition[891]: Ignition 2.24.0 Jan 15 05:43:36.267107 ignition[891]: Stage: kargs Jan 15 05:43:36.267300 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:43:36.267312 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:43:36.268426 ignition[891]: kargs: kargs passed Jan 15 05:43:36.268475 ignition[891]: Ignition finished successfully Jan 15 05:43:36.282654 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 05:43:36.285275 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 05:43:36.310263 kernel: audit: type=1130 audit(1768455816.282:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:36.341720 ignition[898]: Ignition 2.24.0 Jan 15 05:43:36.341766 ignition[898]: Stage: disks Jan 15 05:43:36.341916 ignition[898]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:43:36.341926 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:43:36.342764 ignition[898]: disks: disks passed Jan 15 05:43:36.342808 ignition[898]: Ignition finished successfully Jan 15 05:43:36.369531 kernel: audit: type=1130 audit(1768455816.356:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.353354 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 05:43:36.357394 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 05:43:36.374349 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 05:43:36.382202 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 05:43:36.389744 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 05:43:36.396946 systemd[1]: Reached target basic.target - Basic System. Jan 15 05:43:36.408721 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 05:43:36.457303 systemd-fsck[907]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 05:43:36.465533 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 05:43:36.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.476507 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 05:43:36.638229 kernel: EXT4-fs (vda9): mounted filesystem a9a0585b-a83b-49e4-a2e7-8f2fc277193d r/w with ordered data mode. Quota mode: none. Jan 15 05:43:36.638748 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 15 05:43:36.644432 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 05:43:36.653189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 05:43:36.658773 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 05:43:36.659260 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 05:43:36.659302 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 05:43:36.659327 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 05:43:36.685226 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 05:43:36.693636 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (915) Jan 15 05:43:36.689895 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 15 05:43:36.704388 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:43:36.704414 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:43:36.715045 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:43:36.715082 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:43:36.716939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 05:43:36.919917 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 05:43:36.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:36.927304 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 05:43:36.939650 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 05:43:36.962964 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 05:43:36.970859 kernel: BTRFS info (device vda6): last unmount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:43:36.989990 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 05:43:36.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:37.009477 ignition[1012]: INFO : Ignition 2.24.0 Jan 15 05:43:37.009477 ignition[1012]: INFO : Stage: mount Jan 15 05:43:37.017611 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:43:37.017611 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:43:37.017611 ignition[1012]: INFO : mount: mount passed Jan 15 05:43:37.017611 ignition[1012]: INFO : Ignition finished successfully Jan 15 05:43:37.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:37.012386 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 05:43:37.017301 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 05:43:37.048551 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 05:43:37.068258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1025) Jan 15 05:43:37.075934 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:43:37.075963 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:43:37.085069 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:43:37.085097 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:43:37.087257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 15 05:43:37.130102 ignition[1042]: INFO : Ignition 2.24.0 Jan 15 05:43:37.130102 ignition[1042]: INFO : Stage: files Jan 15 05:43:37.135495 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:43:37.135495 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:43:37.135495 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Jan 15 05:43:37.135495 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 05:43:37.135495 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 05:43:37.154875 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 05:43:37.154875 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 05:43:37.154875 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 05:43:37.154875 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 05:43:37.154875 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 15 05:43:37.138087 unknown[1042]: wrote ssh authorized keys file for user: core Jan 15 05:43:37.204927 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 05:43:37.307712 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 15 05:43:37.307712 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 05:43:37.319268 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 15 05:43:37.642673 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 05:43:37.998713 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 15 05:43:37.998713 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 05:43:38.009835 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 05:43:38.019117 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 05:43:38.019117 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 05:43:38.019117 ignition[1042]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 15 05:43:38.019117 ignition[1042]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 05:43:38.039964 ignition[1042]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 05:43:38.039964 ignition[1042]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 15 05:43:38.039964 ignition[1042]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 05:43:38.055240 ignition[1042]: INFO : files: files passed Jan 15 05:43:38.055240 ignition[1042]: INFO : Ignition finished successfully Jan 15 05:43:38.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.070544 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 15 05:43:38.076628 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 05:43:38.108948 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 05:43:38.113348 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 05:43:38.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.113503 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 15 05:43:38.138488 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory Jan 15 05:43:38.142816 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:43:38.142816 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:43:38.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.155495 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:43:38.146931 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 05:43:38.147840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 05:43:38.170316 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 05:43:38.251645 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 05:43:38.251822 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 05:43:38.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.258830 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 05:43:38.265667 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 05:43:38.275037 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 05:43:38.279799 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 05:43:38.324996 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 05:43:38.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.326576 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 05:43:38.366756 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 15 05:43:38.366967 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:43:38.371125 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:43:38.372206 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 05:43:38.388758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 05:43:38.388956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 05:43:38.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.399227 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 05:43:38.399513 systemd[1]: Stopped target basic.target - Basic System. Jan 15 05:43:38.405248 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 05:43:38.410286 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 05:43:38.419577 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 05:43:38.429304 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 05:43:38.429568 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 05:43:38.441561 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 05:43:38.445729 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 05:43:38.457060 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 05:43:38.464194 systemd[1]: Stopped target swap.target - Swaps. Jan 15 05:43:38.467528 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 05:43:38.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.467763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 05:43:38.481853 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:43:38.482308 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:43:38.495759 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 05:43:38.499693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:43:38.503923 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 05:43:38.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.504284 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 05:43:38.514721 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 05:43:38.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.514882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 05:43:38.517962 systemd[1]: Stopped target paths.target - Path Units. Jan 15 05:43:38.524337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 15 05:43:38.535498 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:43:38.535853 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 05:43:38.545335 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 05:43:38.546329 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 05:43:38.546518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 05:43:38.557368 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 05:43:38.557563 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 05:43:38.562649 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 05:43:38.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.562808 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:43:38.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.565734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 05:43:38.565964 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 05:43:38.578115 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 05:43:38.578387 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 05:43:38.590542 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 05:43:38.599589 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 05:43:38.602773 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:43:38.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.622263 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 05:43:38.627566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 05:43:38.627815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:43:38.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.640081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 05:43:38.640585 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:43:38.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.650807 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 05:43:38.651069 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 05:43:38.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:38.660812 ignition[1099]: INFO : Ignition 2.24.0 Jan 15 05:43:38.660812 ignition[1099]: INFO : Stage: umount Jan 15 05:43:38.660812 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:43:38.660812 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:43:38.660812 ignition[1099]: INFO : umount: umount passed Jan 15 05:43:38.660812 ignition[1099]: INFO : Ignition finished successfully Jan 15 05:43:38.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.664442 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 05:43:38.664585 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 05:43:38.671996 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 05:43:38.672626 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 05:43:38.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.675995 systemd[1]: Stopped target network.target - Network. Jan 15 05:43:38.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.686850 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 05:43:38.686940 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 05:43:38.693818 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 05:43:38.693908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 05:43:38.703292 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 05:43:38.703413 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 05:43:38.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.709895 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 05:43:38.710060 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 05:43:38.719842 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 05:43:38.747287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 05:43:38.748513 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 05:43:38.757408 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 15 05:43:38.757598 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 05:43:38.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.768901 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 05:43:38.769112 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 05:43:38.769000 audit: BPF prog-id=6 op=UNLOAD Jan 15 05:43:38.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.780806 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 05:43:38.780988 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 05:43:38.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.790000 audit: BPF prog-id=9 op=UNLOAD Jan 15 05:43:38.791311 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 05:43:38.799046 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 15 05:43:38.799273 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:43:38.806570 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 05:43:38.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.806658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 05:43:38.814740 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 05:43:38.819508 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 05:43:38.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.819638 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 05:43:38.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.822882 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 05:43:38.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.822939 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:43:38.835241 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 05:43:38.835300 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 05:43:38.842709 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:43:38.882716 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 05:43:38.883068 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 15 05:43:38.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.893269 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 05:43:38.893322 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 05:43:38.903674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 05:43:38.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.903717 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:43:38.912913 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 05:43:38.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.912980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 05:43:38.924413 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 05:43:38.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.924493 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 05:43:38.933265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 05:43:38.933329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 05:43:38.943667 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 05:43:38.948671 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 05:43:38.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.948777 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:43:38.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.951639 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 05:43:38.951704 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:43:38.964634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 05:43:38.964713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:43:38.972082 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 05:43:38.985286 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 05:43:38.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 15 05:43:38.995774 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 05:43:38.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:38.995943 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 05:43:39.005548 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 05:43:39.017897 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 05:43:39.042772 systemd[1]: Switching root. Jan 15 05:43:39.092497 systemd-journald[320]: Journal stopped Jan 15 05:43:40.650666 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 15 05:43:40.650724 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 05:43:40.650749 kernel: SELinux: policy capability open_perms=1 Jan 15 05:43:40.650764 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 05:43:40.650776 kernel: SELinux: policy capability always_check_network=0 Jan 15 05:43:40.650787 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 05:43:40.650801 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 05:43:40.650812 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 05:43:40.650825 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 05:43:40.650840 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 05:43:40.650851 systemd[1]: Successfully loaded SELinux policy in 74.175ms. Jan 15 05:43:40.650876 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.686ms. Jan 15 05:43:40.650888 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 05:43:40.650900 systemd[1]: Detected virtualization kvm. Jan 15 05:43:40.650911 systemd[1]: Detected architecture x86-64. Jan 15 05:43:40.650925 systemd[1]: Detected first boot. Jan 15 05:43:40.650936 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 05:43:40.650948 zram_generator::config[1143]: No configuration found. Jan 15 05:43:40.650961 kernel: Guest personality initialized and is inactive Jan 15 05:43:40.650973 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 05:43:40.650984 kernel: Initialized host personality Jan 15 05:43:40.650995 kernel: NET: Registered PF_VSOCK protocol family Jan 15 05:43:40.651008 systemd[1]: Populated /etc with preset unit settings. Jan 15 05:43:40.651054 kernel: kauditd_printk_skb: 53 callbacks suppressed Jan 15 05:43:40.651070 kernel: audit: type=1334 audit(1768455820.015:86): prog-id=12 op=LOAD Jan 15 05:43:40.651082 kernel: audit: type=1334 audit(1768455820.015:87): prog-id=3 op=UNLOAD Jan 15 05:43:40.651093 kernel: audit: type=1334 audit(1768455820.015:88): prog-id=13 op=LOAD Jan 15 05:43:40.651105 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Jan 15 05:43:40.651116 kernel: audit: type=1334 audit(1768455820.016:89): prog-id=14 op=LOAD Jan 15 05:43:40.651172 kernel: audit: type=1334 audit(1768455820.016:90): prog-id=4 op=UNLOAD Jan 15 05:43:40.651185 kernel: audit: type=1334 audit(1768455820.016:91): prog-id=5 op=UNLOAD Jan 15 05:43:40.651198 kernel: audit: type=1131 audit(1768455820.017:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.651210 kernel: audit: type=1334 audit(1768455820.037:93): prog-id=12 op=UNLOAD Jan 15 05:43:40.651222 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 05:43:40.651234 kernel: audit: type=1130 audit(1768455820.053:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.651245 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 05:43:40.651259 kernel: audit: type=1131 audit(1768455820.053:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.651274 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 05:43:40.651286 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 05:43:40.651297 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 05:43:40.651309 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 05:43:40.651323 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 05:43:40.651335 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 05:43:40.651347 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 05:43:40.651359 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 05:43:40.651370 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:43:40.651382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:43:40.651393 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 05:43:40.651407 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 05:43:40.651418 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 05:43:40.651430 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 05:43:40.651442 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 05:43:40.651453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:43:40.651465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:43:40.651476 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 05:43:40.651490 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 05:43:40.651505 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 15 05:43:40.651516 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 05:43:40.651527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:43:40.651539 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 05:43:40.651551 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 05:43:40.651563 systemd[1]: Reached target slices.target - Slice Units. Jan 15 05:43:40.651575 systemd[1]: Reached target swap.target - Swaps. Jan 15 05:43:40.651588 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 05:43:40.651599 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 05:43:40.651611 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 05:43:40.651622 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:43:40.651634 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 05:43:40.651645 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:43:40.651656 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 05:43:40.651670 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 05:43:40.651682 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 05:43:40.651694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:43:40.651705 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 05:43:40.651717 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 05:43:40.651729 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 05:43:40.651741 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 05:43:40.651754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:43:40.651766 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 05:43:40.651777 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 05:43:40.651788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 05:43:40.651800 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 05:43:40.651813 systemd[1]: Reached target machines.target - Containers. Jan 15 05:43:40.651826 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 05:43:40.651838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 05:43:40.651849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 05:43:40.651861 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 05:43:40.651872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 05:43:40.651884 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 05:43:40.651895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 15 05:43:40.651908 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 05:43:40.651920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 05:43:40.651931 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 05:43:40.651943 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 05:43:40.651954 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 05:43:40.651965 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 05:43:40.651977 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 05:43:40.651991 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:43:40.652002 kernel: ACPI: bus type drm_connector registered Jan 15 05:43:40.652013 kernel: fuse: init (API version 7.41) Jan 15 05:43:40.652058 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 05:43:40.652070 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 05:43:40.652082 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 05:43:40.652094 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 05:43:40.652106 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 05:43:40.652118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 05:43:40.652171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:43:40.652204 systemd-journald[1229]: Collecting audit messages is enabled. Jan 15 05:43:40.652229 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 05:43:40.652241 systemd-journald[1229]: Journal started Jan 15 05:43:40.652260 systemd-journald[1229]: Runtime Journal (/run/log/journal/8658875c7f8b46b0b7acac0f57e1a2dd) is 6M, max 48.2M, 42.1M free. Jan 15 05:43:40.662233 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 05:43:40.662261 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 05:43:40.303000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 05:43:40.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:40.573000 audit: BPF prog-id=14 op=UNLOAD Jan 15 05:43:40.573000 audit: BPF prog-id=13 op=UNLOAD Jan 15 05:43:40.575000 audit: BPF prog-id=15 op=LOAD Jan 15 05:43:40.578000 audit: BPF prog-id=16 op=LOAD Jan 15 05:43:40.578000 audit: BPF prog-id=17 op=LOAD Jan 15 05:43:40.648000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 05:43:40.648000 audit[1229]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe38a1ef20 a2=4000 a3=0 items=0 ppid=1 pid=1229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:40.648000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 05:43:39.990892 systemd[1]: Queued start job for default target multi-user.target. Jan 15 05:43:40.016622 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 05:43:40.017490 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 05:43:40.670252 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 05:43:40.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.674601 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 05:43:40.678233 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 05:43:40.681666 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 05:43:40.684956 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 05:43:40.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.688915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:43:40.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.693110 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 05:43:40.693499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 05:43:40.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.698005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 05:43:40.698527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 05:43:40.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:40.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.702382 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 05:43:40.702624 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 05:43:40.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.706338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 05:43:40.706601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 05:43:40.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.710747 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 05:43:40.710999 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 05:43:40.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.714806 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 05:43:40.715246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 05:43:40.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.718920 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 05:43:40.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.722975 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 15 05:43:40.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.727993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 05:43:40.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.732558 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 05:43:40.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.749525 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 05:43:40.753869 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 05:43:40.759330 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 05:43:40.764399 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 05:43:40.768323 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 05:43:40.768357 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 05:43:40.772625 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 05:43:40.776919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:43:40.777233 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:43:40.785431 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 05:43:40.790263 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 05:43:40.794318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 05:43:40.795436 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 05:43:40.798712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 05:43:40.799841 systemd-journald[1229]: Time spent on flushing to /var/log/journal/8658875c7f8b46b0b7acac0f57e1a2dd is 24.958ms for 1097 entries. Jan 15 05:43:40.799841 systemd-journald[1229]: System Journal (/var/log/journal/8658875c7f8b46b0b7acac0f57e1a2dd) is 8M, max 163.5M, 155.5M free. Jan 15 05:43:40.840372 systemd-journald[1229]: Received client request to flush runtime journal. Jan 15 05:43:40.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:40.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.802798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 05:43:40.809954 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 05:43:40.817346 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 05:43:40.822332 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:43:40.826371 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 05:43:40.829959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 05:43:40.834593 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 05:43:40.842769 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 05:43:40.851196 kernel: loop1: detected capacity change from 0 to 224512 Jan 15 05:43:40.853495 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 05:43:40.857904 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 05:43:40.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.862976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:43:40.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.889316 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 05:43:40.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.893120 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 05:43:40.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.899000 audit: BPF prog-id=18 op=LOAD Jan 15 05:43:40.899000 audit: BPF prog-id=19 op=LOAD Jan 15 05:43:40.900000 audit: BPF prog-id=20 op=LOAD Jan 15 05:43:40.905202 kernel: loop2: detected capacity change from 0 to 111560 Jan 15 05:43:40.901098 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 15 05:43:40.908266 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 05:43:40.906000 audit: BPF prog-id=21 op=LOAD Jan 15 05:43:40.913412 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 05:43:40.919000 audit: BPF prog-id=22 op=LOAD Jan 15 05:43:40.919000 audit: BPF prog-id=23 op=LOAD Jan 15 05:43:40.924000 audit: BPF prog-id=24 op=LOAD Jan 15 05:43:40.924908 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
Jan 15 05:43:40.929000 audit: BPF prog-id=25 op=LOAD Jan 15 05:43:40.929000 audit: BPF prog-id=26 op=LOAD Jan 15 05:43:40.929000 audit: BPF prog-id=27 op=LOAD Jan 15 05:43:40.930390 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 05:43:40.944402 kernel: loop3: detected capacity change from 0 to 50784 Jan 15 05:43:40.956364 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 15 05:43:40.956593 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Jan 15 05:43:40.964541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:43:40.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.982549 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 05:43:40.984259 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 05:43:40.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:40.993249 kernel: loop4: detected capacity change from 0 to 224512 Jan 15 05:43:41.004243 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 05:43:41.010296 kernel: loop5: detected capacity change from 0 to 111560 Jan 15 05:43:41.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.024251 kernel: loop6: detected capacity change from 0 to 50784 Jan 15 05:43:41.025560 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 05:43:41.035185 (sd-merge)[1298]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 15 05:43:41.041535 (sd-merge)[1298]: Merged extensions into '/usr'. Jan 15 05:43:41.047293 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 05:43:41.047422 systemd[1]: Reloading... Jan 15 05:43:41.080708 systemd-oomd[1282]: No swap; memory pressure usage will be degraded Jan 15 05:43:41.093402 systemd-resolved[1283]: Positive Trust Anchors: Jan 15 05:43:41.093688 systemd-resolved[1283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 05:43:41.093734 systemd-resolved[1283]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 05:43:41.093797 systemd-resolved[1283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 05:43:41.101771 systemd-resolved[1283]: Defaulting to hostname 'linux'. Jan 15 05:43:41.126195 zram_generator::config[1331]: No configuration found. 
Jan 15 05:43:41.313669 systemd[1]: Reloading finished in 265 ms. Jan 15 05:43:41.349547 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 05:43:41.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.353873 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 05:43:41.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.357741 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 05:43:41.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.362238 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 05:43:41.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.372363 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:43:41.401835 systemd[1]: Starting ensure-sysext.service... Jan 15 05:43:41.405339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 05:43:41.408000 audit: BPF prog-id=8 op=UNLOAD Jan 15 05:43:41.408000 audit: BPF prog-id=7 op=UNLOAD Jan 15 05:43:41.409000 audit: BPF prog-id=28 op=LOAD Jan 15 05:43:41.409000 audit: BPF prog-id=29 op=LOAD Jan 15 05:43:41.410566 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 15 05:43:41.415000 audit: BPF prog-id=30 op=LOAD Jan 15 05:43:41.415000 audit: BPF prog-id=18 op=UNLOAD Jan 15 05:43:41.415000 audit: BPF prog-id=31 op=LOAD Jan 15 05:43:41.415000 audit: BPF prog-id=32 op=LOAD Jan 15 05:43:41.415000 audit: BPF prog-id=19 op=UNLOAD Jan 15 05:43:41.415000 audit: BPF prog-id=20 op=UNLOAD Jan 15 05:43:41.416000 audit: BPF prog-id=33 op=LOAD Jan 15 05:43:41.416000 audit: BPF prog-id=22 op=UNLOAD Jan 15 05:43:41.416000 audit: BPF prog-id=34 op=LOAD Jan 15 05:43:41.416000 audit: BPF prog-id=35 op=LOAD Jan 15 05:43:41.417000 audit: BPF prog-id=23 op=UNLOAD Jan 15 05:43:41.417000 audit: BPF prog-id=24 op=UNLOAD Jan 15 05:43:41.418000 audit: BPF prog-id=36 op=LOAD Jan 15 05:43:41.418000 audit: BPF prog-id=15 op=UNLOAD Jan 15 05:43:41.418000 audit: BPF prog-id=37 op=LOAD Jan 15 05:43:41.418000 audit: BPF prog-id=38 op=LOAD Jan 15 05:43:41.418000 audit: BPF prog-id=16 op=UNLOAD Jan 15 05:43:41.418000 audit: BPF prog-id=17 op=UNLOAD Jan 15 05:43:41.419000 audit: BPF prog-id=39 op=LOAD Jan 15 05:43:41.419000 audit: BPF prog-id=21 op=UNLOAD Jan 15 05:43:41.422000 audit: BPF prog-id=40 op=LOAD Jan 15 05:43:41.422000 audit: BPF prog-id=25 op=UNLOAD Jan 15 05:43:41.422000 audit: BPF prog-id=41 op=LOAD Jan 15 05:43:41.422000 audit: BPF prog-id=42 op=LOAD Jan 15 05:43:41.422000 audit: BPF prog-id=26 op=UNLOAD Jan 15 05:43:41.422000 audit: BPF prog-id=27 op=UNLOAD Jan 15 05:43:41.430081 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 05:43:41.430537 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 05:43:41.430610 systemd[1]: Reload requested from client PID 1372 ('systemctl') (unit ensure-sysext.service)... Jan 15 05:43:41.430625 systemd[1]: Reloading... Jan 15 05:43:41.430859 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 05:43:41.432303 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 15 05:43:41.432401 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 15 05:43:41.439591 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 05:43:41.439621 systemd-tmpfiles[1373]: Skipping /boot Jan 15 05:43:41.446266 systemd-udevd[1374]: Using default interface naming scheme 'v257'. Jan 15 05:43:41.452713 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 05:43:41.452757 systemd-tmpfiles[1373]: Skipping /boot Jan 15 05:43:41.510207 zram_generator::config[1410]: No configuration found. Jan 15 05:43:41.590239 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 05:43:41.609336 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 05:43:41.618242 kernel: ACPI: button: Power Button [PWRF] Jan 15 05:43:41.655117 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 05:43:41.655639 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 05:43:41.761080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 05:43:41.765547 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 05:43:41.765909 systemd[1]: Reloading finished in 334 ms. Jan 15 05:43:41.782580 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 15 05:43:41.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.791000 audit: BPF prog-id=43 op=LOAD Jan 15 05:43:41.792000 audit: BPF prog-id=39 op=UNLOAD Jan 15 05:43:41.795000 audit: BPF prog-id=44 op=LOAD Jan 15 05:43:41.799000 audit: BPF prog-id=30 op=UNLOAD Jan 15 05:43:41.799000 audit: BPF prog-id=45 op=LOAD Jan 15 05:43:41.799000 audit: BPF prog-id=46 op=LOAD Jan 15 05:43:41.799000 audit: BPF prog-id=31 op=UNLOAD Jan 15 05:43:41.799000 audit: BPF prog-id=32 op=UNLOAD Jan 15 05:43:41.800000 audit: BPF prog-id=47 op=LOAD Jan 15 05:43:41.800000 audit: BPF prog-id=48 op=LOAD Jan 15 05:43:41.800000 audit: BPF prog-id=28 op=UNLOAD Jan 15 05:43:41.800000 audit: BPF prog-id=29 op=UNLOAD Jan 15 05:43:41.805000 audit: BPF prog-id=49 op=LOAD Jan 15 05:43:41.805000 audit: BPF prog-id=36 op=UNLOAD Jan 15 05:43:41.805000 audit: BPF prog-id=50 op=LOAD Jan 15 05:43:41.805000 audit: BPF prog-id=51 op=LOAD Jan 15 05:43:41.805000 audit: BPF prog-id=37 op=UNLOAD Jan 15 05:43:41.805000 audit: BPF prog-id=38 op=UNLOAD Jan 15 05:43:41.806000 audit: BPF prog-id=52 op=LOAD Jan 15 05:43:41.806000 audit: BPF prog-id=33 op=UNLOAD Jan 15 05:43:41.806000 audit: BPF prog-id=53 op=LOAD Jan 15 05:43:41.806000 audit: BPF prog-id=54 op=LOAD Jan 15 05:43:41.806000 audit: BPF prog-id=34 op=UNLOAD Jan 15 05:43:41.806000 audit: BPF prog-id=35 op=UNLOAD Jan 15 05:43:41.808000 audit: BPF prog-id=55 op=LOAD Jan 15 05:43:41.808000 audit: BPF prog-id=40 op=UNLOAD Jan 15 05:43:41.808000 audit: BPF prog-id=56 op=LOAD Jan 15 05:43:41.808000 audit: BPF prog-id=57 op=LOAD Jan 15 05:43:41.808000 audit: BPF prog-id=41 op=UNLOAD Jan 15 05:43:41.808000 audit: BPF prog-id=42 op=UNLOAD Jan 15 05:43:41.816511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:43:41.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.901063 systemd[1]: Finished ensure-sysext.service. Jan 15 05:43:41.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:41.907480 kernel: kvm_amd: TSC scaling supported Jan 15 05:43:41.907544 kernel: kvm_amd: Nested Virtualization enabled Jan 15 05:43:41.907588 kernel: kvm_amd: Nested Paging enabled Jan 15 05:43:41.910863 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 15 05:43:41.910901 kernel: kvm_amd: PMU virtualization is disabled Jan 15 05:43:41.953822 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:43:41.955562 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 05:43:41.963407 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 05:43:41.967469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 05:43:41.973266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 15 05:43:41.980709 kernel: EDAC MC: Ver: 3.0.0 Jan 15 05:43:41.978753 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 05:43:41.984440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 05:43:41.990656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 05:43:41.994242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:43:41.994418 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:43:41.996005 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 05:43:42.001448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 05:43:42.005762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:43:42.014571 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 05:43:42.021000 audit: BPF prog-id=58 op=LOAD Jan 15 05:43:42.023565 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 05:43:42.028000 audit: BPF prog-id=59 op=LOAD Jan 15 05:43:42.030336 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 05:43:42.040770 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 05:43:42.043884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:43:42.044479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:43:42.045904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 05:43:42.047288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 05:43:42.047763 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 05:43:42.048065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 05:43:42.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.051274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 05:43:42.051545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 15 05:43:42.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.059963 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 05:43:42.060366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 05:43:42.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.066170 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 05:43:42.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:42.073852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 05:43:42.073911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 05:43:42.090825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 05:43:42.093000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 05:43:42.093000 audit[1531]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4917f980 a2=420 a3=0 items=0 ppid=1487 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:42.093000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:43:42.093954 augenrules[1531]: No rules Jan 15 05:43:42.095622 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 05:43:42.095940 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 05:43:42.100583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 05:43:42.110051 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 05:43:42.111621 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 05:43:42.161910 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 15 05:43:42.173240 systemd-networkd[1504]: lo: Link UP Jan 15 05:43:42.173266 systemd-networkd[1504]: lo: Gained carrier Jan 15 05:43:42.176470 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:43:42.176479 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 05:43:42.178644 systemd-networkd[1504]: eth0: Link UP Jan 15 05:43:42.179564 systemd-networkd[1504]: eth0: Gained carrier Jan 15 05:43:42.179598 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:43:42.192211 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 05:43:42.192866 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Jan 15 05:43:43.312350 systemd-resolved[1283]: Clock change detected. Flushing caches. Jan 15 05:43:43.312387 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 15 05:43:43.312480 systemd-timesyncd[1506]: Initial clock synchronization to Thu 2026-01-15 05:43:43.312255 UTC. Jan 15 05:43:43.450270 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 05:43:43.454871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:43:43.461497 systemd[1]: Reached target network.target - Network. Jan 15 05:43:43.464893 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 05:43:43.470366 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 05:43:43.477384 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 05:43:43.503653 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 05:43:43.641207 ldconfig[1497]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 05:43:43.647685 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 05:43:43.653603 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 05:43:43.687356 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 05:43:43.693062 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 05:43:43.698649 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 05:43:43.704700 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 05:43:43.712034 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 15 05:43:43.716615 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 05:43:43.720566 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 05:43:43.724535 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 15 05:43:43.728780 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 05:43:43.732160 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 15 05:43:43.735993 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 05:43:43.736034 systemd[1]: Reached target paths.target - Path Units. Jan 15 05:43:43.738873 systemd[1]: Reached target timers.target - Timer Units. Jan 15 05:43:43.743023 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 05:43:43.748590 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 05:43:43.753953 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 05:43:43.758636 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 05:43:43.762534 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 05:43:43.768736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 05:43:43.773041 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 05:43:43.778281 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 05:43:43.782702 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 05:43:43.785732 systemd[1]: Reached target basic.target - Basic System. Jan 15 05:43:43.788682 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 05:43:43.788741 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 05:43:43.790099 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 05:43:43.794973 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 05:43:43.800945 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 05:43:43.818699 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 05:43:43.825536 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 05:43:43.830465 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 05:43:43.831955 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 05:43:43.836701 jq[1556]: false Jan 15 05:43:43.838370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 05:43:43.844583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 05:43:43.845221 extend-filesystems[1557]: Found /dev/vda6 Jan 15 05:43:43.850263 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 05:43:43.855976 extend-filesystems[1557]: Found /dev/vda9 Jan 15 05:43:43.846965 oslogin_cache_refresh[1558]: Refreshing passwd entry cache Jan 15 05:43:43.868837 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing passwd entry cache Jan 15 05:43:43.869061 extend-filesystems[1557]: Checking size of /dev/vda9 Jan 15 05:43:43.861897 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 15 05:43:43.871239 oslogin_cache_refresh[1558]: Failure getting users, quitting Jan 15 05:43:43.880009 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting users, quitting Jan 15 05:43:43.880009 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 05:43:43.880009 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Refreshing group entry cache Jan 15 05:43:43.880074 extend-filesystems[1557]: Resized partition /dev/vda9 Jan 15 05:43:43.874637 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 05:43:43.871259 oslogin_cache_refresh[1558]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 05:43:43.910797 extend-filesystems[1577]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 05:43:43.917596 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Failure getting groups, quitting Jan 15 05:43:43.917596 google_oslogin_nss_cache[1558]: oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 05:43:43.875306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 05:43:43.871359 oslogin_cache_refresh[1558]: Refreshing group entry cache Jan 15 05:43:43.875874 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 05:43:43.908724 oslogin_cache_refresh[1558]: Failure getting groups, quitting Jan 15 05:43:43.879766 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 05:43:43.908738 oslogin_cache_refresh[1558]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 05:43:43.885736 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 05:43:43.892355 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 05:43:43.893705 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 05:43:43.893970 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 05:43:43.894292 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 05:43:43.894663 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 05:43:43.896924 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 05:43:43.897175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 05:43:43.913310 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 05:43:43.913820 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 05:43:43.937597 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 15 05:43:43.937652 jq[1580]: true Jan 15 05:43:43.968384 jq[1598]: true Jan 15 05:43:43.974532 update_engine[1579]: I20260115 05:43:43.973829 1579 main.cc:92] Flatcar Update Engine starting Jan 15 05:43:43.980761 tar[1583]: linux-amd64/LICENSE Jan 15 05:43:43.980761 tar[1583]: linux-amd64/helm Jan 15 05:43:43.998497 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 15 05:43:44.011780 dbus-daemon[1554]: [system] SELinux support is enabled Jan 15 05:43:44.012131 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 15 05:43:44.021587 update_engine[1579]: I20260115 05:43:44.019762 1579 update_check_scheduler.cc:74] Next update check in 6m48s Jan 15 05:43:44.021044 systemd-logind[1578]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 05:43:44.021068 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 05:43:44.022560 systemd-logind[1578]: New seat seat0. Jan 15 05:43:44.026674 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 05:43:44.026674 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 15 05:43:44.026674 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 15 05:43:44.050899 extend-filesystems[1557]: Resized filesystem in /dev/vda9 Jan 15 05:43:44.028791 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 05:43:44.029517 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 05:43:44.056546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 05:43:44.056598 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 05:43:44.064803 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 05:43:44.064839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 05:43:44.070774 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 05:43:44.077015 systemd[1]: Started update-engine.service - Update Engine. Jan 15 05:43:44.084795 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 05:43:44.095638 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Jan 15 05:43:44.098107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 05:43:44.105113 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 15 05:43:44.153758 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 05:43:44.159893 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 05:43:44.188560 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 05:43:44.197742 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 05:43:44.227767 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 05:43:44.228256 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 05:43:44.235987 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 15 05:43:44.248008 containerd[1603]: time="2026-01-15T05:43:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 05:43:44.249403 containerd[1603]: time="2026-01-15T05:43:44.249314049Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.261875437Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.729µs" Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.261911504Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.261953122Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.261968240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262142105Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262157614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262217646Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262228496Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262557480Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262573220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262586895Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263249 containerd[1603]: time="2026-01-15T05:43:44.262594980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.262774876Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.262787199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.262875063Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.263118007Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.263163322Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.263174201Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.263235636Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.263518505Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 05:43:44.263594 containerd[1603]: time="2026-01-15T05:43:44.263590308Z" level=info msg="metadata content store policy set" policy=shared Jan 15 05:43:44.265773 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 05:43:44.269974 containerd[1603]: time="2026-01-15T05:43:44.269914215Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.269984536Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270067220Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270079734Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270092477Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270102576Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270113517Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270121642Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270132302Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270144304Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270153662Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270162949Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 05:43:44.270256 
containerd[1603]: time="2026-01-15T05:43:44.270171455Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 05:43:44.270256 containerd[1603]: time="2026-01-15T05:43:44.270182436Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270315564Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270484069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270514806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270534903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270549781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270559118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270571412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270580318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270590447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270600867Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270610965Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270634590Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270677099Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270688600Z" level=info msg="Start snapshots syncer" Jan 15 05:43:44.270744 containerd[1603]: time="2026-01-15T05:43:44.270708217Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 05:43:44.271117 containerd[1603]: time="2026-01-15T05:43:44.270950540Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 05:43:44.271117 containerd[1603]: time="2026-01-15T05:43:44.270994281Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271064082Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271199674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271222597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271232667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271241713Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271251762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271260899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271270397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271280696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 
05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271290094Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 05:43:44.271367 containerd[1603]: time="2026-01-15T05:43:44.271373249Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271388297Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271396462Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271404677Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271411931Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271473345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271483875Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271499785Z" level=info msg="runtime interface created" Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271505075Z" level=info msg="created NRI interface" Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271514072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271525133Z" level=info msg="Connect containerd service" Jan 15 05:43:44.271738 containerd[1603]: time="2026-01-15T05:43:44.271543186Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 05:43:44.273396 containerd[1603]: time="2026-01-15T05:43:44.273180152Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 05:43:44.274556 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 05:43:44.281800 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 05:43:44.286879 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 15 05:43:44.389578 tar[1583]: linux-amd64/README.md Jan 15 05:43:44.399643 containerd[1603]: time="2026-01-15T05:43:44.399572879Z" level=info msg="Start subscribing containerd event" Jan 15 05:43:44.399747 containerd[1603]: time="2026-01-15T05:43:44.399678797Z" level=info msg="Start recovering state" Jan 15 05:43:44.399809 containerd[1603]: time="2026-01-15T05:43:44.399773514Z" level=info msg="Start event monitor" Jan 15 05:43:44.399865 containerd[1603]: time="2026-01-15T05:43:44.399830640Z" level=info msg="Start cni network conf syncer for default" Jan 15 05:43:44.399865 containerd[1603]: time="2026-01-15T05:43:44.399860957Z" level=info msg="Start streaming server" Jan 15 05:43:44.399903 containerd[1603]: time="2026-01-15T05:43:44.399871056Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 05:43:44.399903 containerd[1603]: time="2026-01-15T05:43:44.399880403Z" level=info msg="runtime interface starting up..." Jan 15 05:43:44.399903 containerd[1603]: time="2026-01-15T05:43:44.399887366Z" level=info msg="starting plugins..." Jan 15 05:43:44.399903 containerd[1603]: time="2026-01-15T05:43:44.399902385Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 05:43:44.400098 containerd[1603]: time="2026-01-15T05:43:44.400036084Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 05:43:44.400295 containerd[1603]: time="2026-01-15T05:43:44.400209548Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 05:43:44.400627 containerd[1603]: time="2026-01-15T05:43:44.400514287Z" level=info msg="containerd successfully booted in 0.152917s" Jan 15 05:43:44.400761 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 05:43:44.409177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 05:43:44.475919 systemd-networkd[1504]: eth0: Gained IPv6LL Jan 15 05:43:44.479691 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 05:43:44.486789 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 05:43:44.494864 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 15 05:43:44.518116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:43:44.524297 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 05:43:44.561163 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 15 05:43:44.561764 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 15 05:43:44.567785 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 05:43:44.574099 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 05:43:45.411812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:43:45.417244 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 05:43:45.417657 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:43:45.421615 systemd[1]: Startup finished in 3.414s (kernel) + 6.471s (initrd) + 5.070s (userspace) = 14.956s. Jan 15 05:43:45.863137 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 05:43:45.865884 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:33436.service - OpenSSH per-connection server daemon (10.0.0.1:33436). 
Jan 15 05:43:45.885220 kubelet[1696]: E0115 05:43:45.885140 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:43:45.887943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:43:45.888114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:43:45.888755 systemd[1]: kubelet.service: Consumed 973ms CPU time, 264.7M memory peak. Jan 15 05:43:45.953001 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 33436 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:45.955300 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:45.963286 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 05:43:45.964727 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 05:43:45.971367 systemd-logind[1578]: New session 1 of user core. Jan 15 05:43:45.994013 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 05:43:45.997896 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 05:43:46.025172 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:46.028978 systemd-logind[1578]: New session 2 of user core. Jan 15 05:43:46.191391 systemd[1716]: Queued start job for default target default.target. Jan 15 05:43:46.200294 systemd[1716]: Created slice app.slice - User Application Slice. Jan 15 05:43:46.200398 systemd[1716]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 15 05:43:46.200477 systemd[1716]: Reached target paths.target - Paths. Jan 15 05:43:46.200564 systemd[1716]: Reached target timers.target - Timers. Jan 15 05:43:46.202689 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 05:43:46.204065 systemd[1716]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 15 05:43:46.217010 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 05:43:46.217474 systemd[1716]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 05:43:46.217631 systemd[1716]: Reached target sockets.target - Sockets. Jan 15 05:43:46.217699 systemd[1716]: Reached target basic.target - Basic System. Jan 15 05:43:46.217744 systemd[1716]: Reached target default.target - Main User Target. Jan 15 05:43:46.217777 systemd[1716]: Startup finished in 182ms. Jan 15 05:43:46.218644 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 05:43:46.221062 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 05:43:46.245073 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:33442.service - OpenSSH per-connection server daemon (10.0.0.1:33442). Jan 15 05:43:46.310383 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 33442 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:46.311853 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:46.317809 systemd-logind[1578]: New session 3 of user core. Jan 15 05:43:46.335655 systemd[1]: Started session-3.scope - Session 3 of User core. 
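The kubelet exit above (and its retry later in this log) is the expected first-boot state on a node that has not yet run kubeadm init or kubeadm join: those commands are what normally write /var/lib/kubelet/config.yaml, after which the unit's scheduled restart succeeds. The sketch below shows a file of that general shape; every field value is an assumed default for illustration, not a setting read from this machine.

    # Sketch: the kind of /var/lib/kubelet/config.yaml the kubelet is looking for.
    # kubeadm init/join normally generates this file; the values below are
    # assumed defaults for illustration only.
    from pathlib import Path

    config_lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",          # assumption: systemd cgroup driver
        "clusterDNS:",
        "- 10.96.0.10",                   # assumption: default cluster DNS service IP
        "clusterDomain: cluster.local",
    ]

    Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    Path("/var/lib/kubelet/config.yaml").write_text("\n".join(config_lines) + "\n")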
Jan 15 05:43:46.350830 sshd[1734]: Connection closed by 10.0.0.1 port 33442 Jan 15 05:43:46.351481 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jan 15 05:43:46.359678 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:33442.service: Deactivated successfully. Jan 15 05:43:46.361519 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 05:43:46.362605 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit. Jan 15 05:43:46.365188 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:33450.service - OpenSSH per-connection server daemon (10.0.0.1:33450). Jan 15 05:43:46.365910 systemd-logind[1578]: Removed session 3. Jan 15 05:43:46.425921 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 33450 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:46.427283 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:46.432693 systemd-logind[1578]: New session 4 of user core. Jan 15 05:43:46.443643 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 05:43:46.454630 sshd[1745]: Connection closed by 10.0.0.1 port 33450 Jan 15 05:43:46.455054 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 15 05:43:46.464122 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:33450.service: Deactivated successfully. Jan 15 05:43:46.466106 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 05:43:46.467123 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit. Jan 15 05:43:46.470179 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:33464.service - OpenSSH per-connection server daemon (10.0.0.1:33464). Jan 15 05:43:46.470896 systemd-logind[1578]: Removed session 4. Jan 15 05:43:46.541573 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 33464 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:46.543076 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:46.548853 systemd-logind[1578]: New session 5 of user core. Jan 15 05:43:46.563636 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 05:43:46.581157 sshd[1756]: Connection closed by 10.0.0.1 port 33464 Jan 15 05:43:46.581712 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 15 05:43:46.601709 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:33464.service: Deactivated successfully. Jan 15 05:43:46.603624 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 05:43:46.604812 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit. Jan 15 05:43:46.607715 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:33480.service - OpenSSH per-connection server daemon (10.0.0.1:33480). Jan 15 05:43:46.608270 systemd-logind[1578]: Removed session 5. Jan 15 05:43:46.671618 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 33480 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:46.673316 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:46.679531 systemd-logind[1578]: New session 6 of user core. Jan 15 05:43:46.690631 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 15 05:43:46.721245 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 05:43:46.721882 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:43:46.736901 sudo[1768]: pam_unix(sudo:session): session closed for user root Jan 15 05:43:46.738759 sshd[1767]: Connection closed by 10.0.0.1 port 33480 Jan 15 05:43:46.739190 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jan 15 05:43:46.753519 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:33480.service: Deactivated successfully. Jan 15 05:43:46.755672 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 05:43:46.756772 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit. Jan 15 05:43:46.759988 systemd[1]: Started sshd@5-10.0.0.123:22-10.0.0.1:33482.service - OpenSSH per-connection server daemon (10.0.0.1:33482). Jan 15 05:43:46.761666 systemd-logind[1578]: Removed session 6. Jan 15 05:43:46.820001 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 33482 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:46.821752 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:46.827630 systemd-logind[1578]: New session 7 of user core. Jan 15 05:43:46.834726 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 05:43:46.851398 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 05:43:46.851812 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:43:46.857613 sudo[1781]: pam_unix(sudo:session): session closed for user root Jan 15 05:43:46.865615 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 05:43:46.865966 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:43:46.874957 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 05:43:46.924000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 05:43:46.925539 augenrules[1805]: No rules Jan 15 05:43:46.926562 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 05:43:46.926949 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 15 05:43:46.927791 kernel: kauditd_printk_skb: 130 callbacks suppressed Jan 15 05:43:46.927822 kernel: audit: type=1305 audit(1768455826.924:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 05:43:46.928754 sudo[1780]: pam_unix(sudo:session): session closed for user root Jan 15 05:43:46.930189 sshd[1779]: Connection closed by 10.0.0.1 port 33482 Jan 15 05:43:46.930687 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 15 05:43:46.924000 audit[1805]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc6cb41ac0 a2=420 a3=0 items=0 ppid=1786 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:46.943475 kernel: audit: type=1300 audit(1768455826.924:222): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc6cb41ac0 a2=420 a3=0 items=0 ppid=1786 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:46.943567 kernel: audit: type=1327 audit(1768455826.924:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:43:46.924000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:43:46.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.955538 kernel: audit: type=1130 audit(1768455826.926:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.955575 kernel: audit: type=1131 audit(1768455826.926:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.928000 audit[1780]: USER_END pid=1780 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.970905 kernel: audit: type=1106 audit(1768455826.928:225): pid=1780 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.970935 kernel: audit: type=1104 audit(1768455826.928:226): pid=1780 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:43:46.928000 audit[1780]: CRED_DISP pid=1780 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:46.978504 kernel: audit: type=1106 audit(1768455826.931:227): pid=1775 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:46.931000 audit[1775]: USER_END pid=1775 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:46.931000 audit[1775]: CRED_DISP pid=1775 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:46.999381 kernel: audit: type=1104 audit(1768455826.931:228): pid=1775 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:47.004006 systemd[1]: sshd@5-10.0.0.123:22-10.0.0.1:33482.service: Deactivated successfully. Jan 15 05:43:47.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.123:22-10.0.0.1:33482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:47.005820 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 05:43:47.006818 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit. Jan 15 05:43:47.009313 systemd[1]: Started sshd@6-10.0.0.123:22-10.0.0.1:33486.service - OpenSSH per-connection server daemon (10.0.0.1:33486). Jan 15 05:43:47.010043 systemd-logind[1578]: Removed session 7. Jan 15 05:43:47.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.123:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:47.013490 kernel: audit: type=1131 audit(1768455827.003:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.123:22-10.0.0.1:33482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:43:47.063000 audit[1814]: USER_ACCT pid=1814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:47.063742 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 33486 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:43:47.064000 audit[1814]: CRED_ACQ pid=1814 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:47.064000 audit[1814]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf201f1a0 a2=3 a3=0 items=0 ppid=1 pid=1814 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:47.064000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:43:47.065288 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:43:47.071148 systemd-logind[1578]: New session 8 of user core. Jan 15 05:43:47.080667 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 05:43:47.083000 audit[1814]: USER_START pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:47.085000 audit[1819]: CRED_ACQ pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:43:47.097000 audit[1820]: USER_ACCT pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:43:47.098157 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 05:43:47.097000 audit[1820]: CRED_REFR pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:43:47.098672 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:43:47.098000 audit[1820]: USER_START pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:43:47.431936 systemd[1]: Starting docker.service - Docker Application Container Engine... 
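The audit records above, and the netfilter burst that follows once dockerd starts, carry PROCTITLE fields whose value is the process argv hex-encoded, with NUL bytes separating the arguments. A short decoding sketch; the sample value is copied from the auditctl record logged above.

    # Decode an audit PROCTITLE value: the field is the process argv,
    # hex-encoded, with NUL bytes separating the individual arguments.
    def decode_proctitle(hex_value):
        return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

    # Sample copied from the auditctl PROCTITLE record above.
    sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(sample))
    # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']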
Jan 15 05:43:47.448773 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 05:43:47.730005 dockerd[1841]: time="2026-01-15T05:43:47.729789378Z" level=info msg="Starting up" Jan 15 05:43:47.731217 dockerd[1841]: time="2026-01-15T05:43:47.731182128Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 05:43:47.746524 dockerd[1841]: time="2026-01-15T05:43:47.746467056Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 05:43:47.973782 dockerd[1841]: time="2026-01-15T05:43:47.973541475Z" level=info msg="Loading containers: start." Jan 15 05:43:47.988501 kernel: Initializing XFRM netlink socket Jan 15 05:43:48.084000 audit[1895]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.084000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd651237f0 a2=0 a3=0 items=0 ppid=1841 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:43:48.089000 audit[1897]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.089000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcb3911d80 a2=0 a3=0 items=0 ppid=1841 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:43:48.093000 audit[1899]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.093000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe00d4d8b0 a2=0 a3=0 items=0 ppid=1841 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:43:48.097000 audit[1901]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.097000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe723a9370 a2=0 a3=0 items=0 ppid=1841 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.097000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 05:43:48.101000 audit[1903]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.101000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd231bc310 a2=0 a3=0 items=0 ppid=1841 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 05:43:48.105000 audit[1905]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.105000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc0f6dc80 a2=0 a3=0 items=0 ppid=1841 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:43:48.109000 audit[1907]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.109000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4530f750 a2=0 a3=0 items=0 ppid=1841 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:43:48.118000 audit[1909]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.118000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc70677900 a2=0 a3=0 items=0 ppid=1841 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 05:43:48.175000 audit[1912]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.175000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff3ab25840 a2=0 a3=0 items=0 ppid=1841 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 15 05:43:48.180000 audit[1914]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1914 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.180000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd6216cbd0 a2=0 a3=0 items=0 ppid=1841 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 05:43:48.184000 audit[1916]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.184000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcb55bcdd0 a2=0 a3=0 items=0 ppid=1841 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 05:43:48.189000 audit[1918]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.189000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe5dce5900 a2=0 a3=0 items=0 ppid=1841 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:43:48.193000 audit[1920]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.193000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffea95466d0 a2=0 a3=0 items=0 ppid=1841 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 05:43:48.269000 audit[1950]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.269000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe97fd94d0 a2=0 a3=0 items=0 ppid=1841 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.269000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:43:48.273000 audit[1952]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.273000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbbb24e30 a2=0 a3=0 items=0 ppid=1841 pid=1952 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.273000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:43:48.277000 audit[1954]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.277000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea5a36b10 a2=0 a3=0 items=0 ppid=1841 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:43:48.281000 audit[1956]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.281000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdac3d3400 a2=0 a3=0 items=0 ppid=1841 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 05:43:48.286000 audit[1958]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.286000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd179af810 a2=0 a3=0 items=0 ppid=1841 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.286000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 05:43:48.290000 audit[1960]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.290000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd3f6ef260 a2=0 a3=0 items=0 ppid=1841 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.290000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:43:48.295000 audit[1962]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.295000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc32fec290 a2=0 a3=0 items=0 ppid=1841 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.295000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:43:48.299000 audit[1964]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.299000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc802684e0 a2=0 a3=0 items=0 ppid=1841 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 05:43:48.305000 audit[1966]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.305000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe3d21a0c0 a2=0 a3=0 items=0 ppid=1841 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 15 05:43:48.309000 audit[1968]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.309000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff231500b0 a2=0 a3=0 items=0 ppid=1841 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 05:43:48.314000 audit[1970]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.314000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffa52dbb00 a2=0 a3=0 items=0 ppid=1841 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.314000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 05:43:48.320000 audit[1972]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.320000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc76dbaae0 a2=0 a3=0 items=0 ppid=1841 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.320000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:43:48.325000 audit[1974]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.325000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd9a4f08d0 a2=0 a3=0 items=0 ppid=1841 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.325000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 05:43:48.338000 audit[1979]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.338000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4d9bc9c0 a2=0 a3=0 items=0 ppid=1841 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 05:43:48.345000 audit[1981]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.345000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc53387aa0 a2=0 a3=0 items=0 ppid=1841 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 05:43:48.350000 audit[1983]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.350000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcff11d8a0 a2=0 a3=0 items=0 ppid=1841 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.350000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 05:43:48.354000 audit[1985]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.354000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffecfc201d0 a2=0 a3=0 items=0 ppid=1841 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.354000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 05:43:48.358000 audit[1987]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.358000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdf9e5fa80 a2=0 a3=0 items=0 ppid=1841 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 05:43:48.363000 audit[1989]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:43:48.363000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff6b33fea0 a2=0 a3=0 items=0 ppid=1841 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 05:43:48.385000 audit[1993]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.385000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffec8549570 a2=0 a3=0 items=0 ppid=1841 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.385000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 15 05:43:48.390000 audit[1995]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.390000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffedc963cc0 a2=0 a3=0 items=0 ppid=1841 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.390000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 15 05:43:48.407000 audit[2003]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.407000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc70783fa0 a2=0 a3=0 items=0 ppid=1841 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.407000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 15 05:43:48.425000 audit[2009]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.425000 
audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc33eb0180 a2=0 a3=0 items=0 ppid=1841 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 15 05:43:48.429000 audit[2011]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.429000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc4a791e00 a2=0 a3=0 items=0 ppid=1841 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.429000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 15 05:43:48.433000 audit[2013]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.433000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd39a67860 a2=0 a3=0 items=0 ppid=1841 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.433000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 15 05:43:48.437000 audit[2015]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.437000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff8156bcb0 a2=0 a3=0 items=0 ppid=1841 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.437000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:43:48.442000 audit[2017]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:43:48.442000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd2dc9d5d0 a2=0 a3=0 items=0 ppid=1841 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:43:48.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 15 05:43:48.443989 
systemd-networkd[1504]: docker0: Link UP Jan 15 05:43:48.451056 dockerd[1841]: time="2026-01-15T05:43:48.450876380Z" level=info msg="Loading containers: done." Jan 15 05:43:48.477747 dockerd[1841]: time="2026-01-15T05:43:48.477641659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 05:43:48.477747 dockerd[1841]: time="2026-01-15T05:43:48.477736336Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 05:43:48.477917 dockerd[1841]: time="2026-01-15T05:43:48.477820413Z" level=info msg="Initializing buildkit" Jan 15 05:43:48.522780 dockerd[1841]: time="2026-01-15T05:43:48.522682621Z" level=info msg="Completed buildkit initialization" Jan 15 05:43:48.530988 dockerd[1841]: time="2026-01-15T05:43:48.530885259Z" level=info msg="Daemon has completed initialization" Jan 15 05:43:48.530988 dockerd[1841]: time="2026-01-15T05:43:48.530971941Z" level=info msg="API listen on /run/docker.sock" Jan 15 05:43:48.531213 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 05:43:48.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:48.764913 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck133604333-merged.mount: Deactivated successfully. Jan 15 05:43:49.264082 containerd[1603]: time="2026-01-15T05:43:49.264013380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 05:43:49.772863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709170432.mount: Deactivated successfully. 
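dockerd and containerd both emit logfmt-style records (time=..., level=..., msg=..., plus extra keys), as seen throughout this boot. The small parsing sketch below pulls fields out of such lines; it assumes values are either bare tokens or double-quoted strings with backslash-escaped quotes, which matches the lines in this log.

    # Sketch: extract key=value pairs from the dockerd/containerd logfmt lines
    # above. Assumes values are either unquoted tokens or double-quoted strings
    # with backslash-escaped quotes.
    import re

    PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_logfmt(line):
        fields = {}
        for key, value in PAIR.findall(line):
            if value.startswith('"') and value.endswith('"'):
                value = value[1:-1].replace('\\"', '"')
            fields[key] = value
        return fields

    # Line copied from the dockerd output above.
    line = 'time="2026-01-15T05:43:48.450876380Z" level=info msg="Loading containers: done."'
    rec = parse_logfmt(line)
    print(rec["level"], rec["msg"])   # info Loading containers: done.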
Jan 15 05:43:51.231645 containerd[1603]: time="2026-01-15T05:43:51.231578284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:51.232532 containerd[1603]: time="2026-01-15T05:43:51.232401572Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 15 05:43:51.233811 containerd[1603]: time="2026-01-15T05:43:51.233747528Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:51.237503 containerd[1603]: time="2026-01-15T05:43:51.237309034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:51.238780 containerd[1603]: time="2026-01-15T05:43:51.238706993Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.974662345s" Jan 15 05:43:51.238780 containerd[1603]: time="2026-01-15T05:43:51.238765262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 15 05:43:51.239586 containerd[1603]: time="2026-01-15T05:43:51.239510704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 05:43:52.873739 containerd[1603]: time="2026-01-15T05:43:52.873651548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:52.874857 containerd[1603]: time="2026-01-15T05:43:52.874767913Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 15 05:43:52.876121 containerd[1603]: time="2026-01-15T05:43:52.876051609Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:52.880614 containerd[1603]: time="2026-01-15T05:43:52.880580030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:52.881698 containerd[1603]: time="2026-01-15T05:43:52.881621795Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.642055477s" Jan 15 05:43:52.881698 containerd[1603]: time="2026-01-15T05:43:52.881669834Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 15 
05:43:52.882209 containerd[1603]: time="2026-01-15T05:43:52.882152294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 05:43:54.396910 containerd[1603]: time="2026-01-15T05:43:54.396704464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:54.398237 containerd[1603]: time="2026-01-15T05:43:54.398107461Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 15 05:43:54.399581 containerd[1603]: time="2026-01-15T05:43:54.399493008Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:54.402831 containerd[1603]: time="2026-01-15T05:43:54.402748866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:54.403735 containerd[1603]: time="2026-01-15T05:43:54.403648719Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.521432436s" Jan 15 05:43:54.403735 containerd[1603]: time="2026-01-15T05:43:54.403718810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 15 05:43:54.404649 containerd[1603]: time="2026-01-15T05:43:54.404281911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 05:43:55.324199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630160250.mount: Deactivated successfully. 
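As a rough worked number from the figures above: the kube-apiserver pull reported 27401903 bytes read over 1.974662345s, i.e. roughly 13.9 MB/s of effective registry throughput (the separate "size" value in the Pulled message is reported independently and is not used in this estimate).

    # Rough effective fetch rate for the kube-apiserver pull logged above:
    # reported "bytes read=27401903" over the reported duration of 1.974662345 s.
    bytes_read = 27_401_903
    seconds = 1.974662345
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")   # ~13.9 MB/s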
Jan 15 05:43:55.969156 containerd[1603]: time="2026-01-15T05:43:55.969036309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:55.970557 containerd[1603]: time="2026-01-15T05:43:55.970299237Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 15 05:43:55.971836 containerd[1603]: time="2026-01-15T05:43:55.971744588Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:55.974497 containerd[1603]: time="2026-01-15T05:43:55.974232690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:55.975473 containerd[1603]: time="2026-01-15T05:43:55.975284647Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.570966909s" Jan 15 05:43:55.975473 containerd[1603]: time="2026-01-15T05:43:55.975347945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 15 05:43:55.976176 containerd[1603]: time="2026-01-15T05:43:55.976125462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 05:43:55.979178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 05:43:55.981721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:43:56.217860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:43:56.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:56.221183 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 15 05:43:56.221255 kernel: audit: type=1130 audit(1768455836.217:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:43:56.247957 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:43:56.303264 kubelet[2144]: E0115 05:43:56.303221 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:43:56.310659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:43:56.310883 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 15 05:43:56.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:43:56.311464 systemd[1]: kubelet.service: Consumed 262ms CPU time, 111.6M memory peak. Jan 15 05:43:56.323504 kernel: audit: type=1131 audit(1768455836.310:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:43:56.523693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278703284.mount: Deactivated successfully. Jan 15 05:43:57.388793 containerd[1603]: time="2026-01-15T05:43:57.388695254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:57.389892 containerd[1603]: time="2026-01-15T05:43:57.389811108Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 15 05:43:57.391556 containerd[1603]: time="2026-01-15T05:43:57.391502755Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:57.394375 containerd[1603]: time="2026-01-15T05:43:57.394305808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:43:57.395123 containerd[1603]: time="2026-01-15T05:43:57.395074603Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.418895392s" Jan 15 05:43:57.395123 containerd[1603]: time="2026-01-15T05:43:57.395102605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 15 05:43:57.395819 containerd[1603]: time="2026-01-15T05:43:57.395758340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 05:43:57.773263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653761481.mount: Deactivated successfully. 
Jan 15 05:43:57.779924 containerd[1603]: time="2026-01-15T05:43:57.779808254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:43:57.780878 containerd[1603]: time="2026-01-15T05:43:57.780797539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 15 05:43:57.782066 containerd[1603]: time="2026-01-15T05:43:57.781999289Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:43:57.784352 containerd[1603]: time="2026-01-15T05:43:57.784283356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:43:57.784949 containerd[1603]: time="2026-01-15T05:43:57.784884344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 389.097691ms" Jan 15 05:43:57.784949 containerd[1603]: time="2026-01-15T05:43:57.784933365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 05:43:57.785690 containerd[1603]: time="2026-01-15T05:43:57.785513814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 05:43:58.222101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368841041.mount: Deactivated successfully. 
Jan 15 05:44:00.368726 kernel: hrtimer: interrupt took 2326594 ns Jan 15 05:44:01.527852 containerd[1603]: time="2026-01-15T05:44:01.527669489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:01.529119 containerd[1603]: time="2026-01-15T05:44:01.529067920Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 15 05:44:01.530534 containerd[1603]: time="2026-01-15T05:44:01.530490095Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:01.533599 containerd[1603]: time="2026-01-15T05:44:01.533498803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:01.534273 containerd[1603]: time="2026-01-15T05:44:01.534235608Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.748693021s" Jan 15 05:44:01.534323 containerd[1603]: time="2026-01-15T05:44:01.534275964Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 15 05:44:04.049568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:44:04.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:04.049916 systemd[1]: kubelet.service: Consumed 262ms CPU time, 111.6M memory peak. Jan 15 05:44:04.052860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:44:04.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:04.069336 kernel: audit: type=1130 audit(1768455844.049:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:04.069503 kernel: audit: type=1131 audit(1768455844.049:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:04.091392 systemd[1]: Reload requested from client PID 2291 ('systemctl') (unit session-8.scope)... Jan 15 05:44:04.091572 systemd[1]: Reloading... Jan 15 05:44:04.225503 zram_generator::config[2338]: No configuration found. Jan 15 05:44:04.500555 systemd[1]: Reloading finished in 408 ms. 
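Note: the pull records above report both a duration and a "bytes read" figure, which gives a rough effective transfer rate; a quick arithmetic sketch using the numbers exactly as logged (these are transfer bytes, not the larger unpacked image sizes also shown):

# Rough pull-throughput arithmetic from the figures reported above.
pulls = {
    "kube-scheduler:v1.32.11": (19_396_939, 1.521432436),
    "kube-proxy:v1.32.11":     (19_572_392, 1.570966909),
    "etcd:3.5.16-0":           (45_502_580, 3.748693021),
}
for image, (nbytes, seconds) in pulls.items():
    print(f"{image:26s} {nbytes / seconds / (1 << 20):5.1f} MiB/s")
# All three come out near 12 MiB/s, so the 3.7 s etcd pull reflects its size,
# not a registry slowdown.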
Jan 15 05:44:04.531000 audit: BPF prog-id=63 op=LOAD Jan 15 05:44:04.534498 kernel: audit: type=1334 audit(1768455844.531:284): prog-id=63 op=LOAD Jan 15 05:44:04.531000 audit: BPF prog-id=49 op=UNLOAD Jan 15 05:44:04.531000 audit: BPF prog-id=64 op=LOAD Jan 15 05:44:04.539960 kernel: audit: type=1334 audit(1768455844.531:285): prog-id=49 op=UNLOAD Jan 15 05:44:04.540018 kernel: audit: type=1334 audit(1768455844.531:286): prog-id=64 op=LOAD Jan 15 05:44:04.540053 kernel: audit: type=1334 audit(1768455844.531:287): prog-id=65 op=LOAD Jan 15 05:44:04.531000 audit: BPF prog-id=65 op=LOAD Jan 15 05:44:04.531000 audit: BPF prog-id=50 op=UNLOAD Jan 15 05:44:04.545139 kernel: audit: type=1334 audit(1768455844.531:288): prog-id=50 op=UNLOAD Jan 15 05:44:04.545174 kernel: audit: type=1334 audit(1768455844.531:289): prog-id=51 op=UNLOAD Jan 15 05:44:04.531000 audit: BPF prog-id=51 op=UNLOAD Jan 15 05:44:04.547744 kernel: audit: type=1334 audit(1768455844.532:290): prog-id=66 op=LOAD Jan 15 05:44:04.532000 audit: BPF prog-id=66 op=LOAD Jan 15 05:44:04.532000 audit: BPF prog-id=43 op=UNLOAD Jan 15 05:44:04.552904 kernel: audit: type=1334 audit(1768455844.532:291): prog-id=43 op=UNLOAD Jan 15 05:44:04.533000 audit: BPF prog-id=67 op=LOAD Jan 15 05:44:04.533000 audit: BPF prog-id=68 op=LOAD Jan 15 05:44:04.533000 audit: BPF prog-id=47 op=UNLOAD Jan 15 05:44:04.533000 audit: BPF prog-id=48 op=UNLOAD Jan 15 05:44:04.535000 audit: BPF prog-id=69 op=LOAD Jan 15 05:44:04.535000 audit: BPF prog-id=52 op=UNLOAD Jan 15 05:44:04.535000 audit: BPF prog-id=70 op=LOAD Jan 15 05:44:04.536000 audit: BPF prog-id=71 op=LOAD Jan 15 05:44:04.536000 audit: BPF prog-id=53 op=UNLOAD Jan 15 05:44:04.536000 audit: BPF prog-id=54 op=UNLOAD Jan 15 05:44:04.536000 audit: BPF prog-id=72 op=LOAD Jan 15 05:44:04.536000 audit: BPF prog-id=44 op=UNLOAD Jan 15 05:44:04.537000 audit: BPF prog-id=73 op=LOAD Jan 15 05:44:04.537000 audit: BPF prog-id=74 op=LOAD Jan 15 05:44:04.537000 audit: BPF prog-id=45 op=UNLOAD Jan 15 05:44:04.537000 audit: BPF prog-id=46 op=UNLOAD Jan 15 05:44:04.564000 audit: BPF prog-id=75 op=LOAD Jan 15 05:44:04.564000 audit: BPF prog-id=60 op=UNLOAD Jan 15 05:44:04.564000 audit: BPF prog-id=76 op=LOAD Jan 15 05:44:04.564000 audit: BPF prog-id=77 op=LOAD Jan 15 05:44:04.564000 audit: BPF prog-id=61 op=UNLOAD Jan 15 05:44:04.564000 audit: BPF prog-id=62 op=UNLOAD Jan 15 05:44:04.565000 audit: BPF prog-id=78 op=LOAD Jan 15 05:44:04.565000 audit: BPF prog-id=58 op=UNLOAD Jan 15 05:44:04.566000 audit: BPF prog-id=79 op=LOAD Jan 15 05:44:04.566000 audit: BPF prog-id=59 op=UNLOAD Jan 15 05:44:04.566000 audit: BPF prog-id=80 op=LOAD Jan 15 05:44:04.567000 audit: BPF prog-id=55 op=UNLOAD Jan 15 05:44:04.567000 audit: BPF prog-id=81 op=LOAD Jan 15 05:44:04.567000 audit: BPF prog-id=82 op=LOAD Jan 15 05:44:04.567000 audit: BPF prog-id=56 op=UNLOAD Jan 15 05:44:04.567000 audit: BPF prog-id=57 op=UNLOAD Jan 15 05:44:04.591315 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 05:44:04.591558 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 05:44:04.591981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:44:04.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:44:04.592088 systemd[1]: kubelet.service: Consumed 193ms CPU time, 98.5M memory peak. 
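Note: the run of `audit: BPF prog-id=... op=LOAD/UNLOAD` records above is consistent with systemd re-attaching its per-unit BPF programs during the Reload; the interleaved kernel `type=1334` lines carry the same prog-ids and audit serials, i.e. they are the same events recorded twice. A small sketch for summarising such a burst from journal text, assuming only the `prog-id=<n> op=<LOAD|UNLOAD>` field layout seen here:

import re, sys
from collections import Counter

# Count BPF LOAD/UNLOAD audit events in journal text (e.g. piped from `journalctl -k`).
# Matching on "audit: BPF prog-id" deliberately skips the kernel "type=1334"
# echoes of the same events so nothing is double-counted.
BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def summarize(journal_text: str) -> Counter:
    return Counter(m.group(2) for m in BPF_RE.finditer(journal_text))

if __name__ == "__main__":
    print(summarize(sys.stdin.read()))  # e.g. Counter({'LOAD': 20, 'UNLOAD': 19})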
Jan 15 05:44:04.594267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:44:04.825571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:44:04.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:04.843783 (kubelet)[2386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 05:44:04.923253 kubelet[2386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:44:04.923253 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 05:44:04.923253 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:44:04.923701 kubelet[2386]: I0115 05:44:04.923267 2386 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 05:44:05.294106 kubelet[2386]: I0115 05:44:05.294034 2386 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 05:44:05.294106 kubelet[2386]: I0115 05:44:05.294093 2386 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 05:44:05.294580 kubelet[2386]: I0115 05:44:05.294510 2386 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 05:44:05.321806 kubelet[2386]: E0115 05:44:05.321690 2386 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:05.322805 kubelet[2386]: I0115 05:44:05.322704 2386 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 05:44:05.332406 kubelet[2386]: I0115 05:44:05.332372 2386 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 05:44:05.339758 kubelet[2386]: I0115 05:44:05.339711 2386 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 05:44:05.341190 kubelet[2386]: I0115 05:44:05.341107 2386 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 05:44:05.341378 kubelet[2386]: I0115 05:44:05.341161 2386 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 05:44:05.341378 kubelet[2386]: I0115 05:44:05.341376 2386 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 05:44:05.341651 kubelet[2386]: I0115 05:44:05.341385 2386 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 05:44:05.341651 kubelet[2386]: I0115 05:44:05.341634 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:44:05.345974 kubelet[2386]: I0115 05:44:05.345889 2386 kubelet.go:446] "Attempting to sync node with API server" Jan 15 05:44:05.345974 kubelet[2386]: I0115 05:44:05.345937 2386 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 05:44:05.345974 kubelet[2386]: I0115 05:44:05.345957 2386 kubelet.go:352] "Adding apiserver pod source" Jan 15 05:44:05.346161 kubelet[2386]: I0115 05:44:05.346058 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 05:44:05.351087 kubelet[2386]: I0115 05:44:05.351003 2386 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 05:44:05.351558 kubelet[2386]: I0115 05:44:05.351507 2386 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 05:44:05.352576 kubelet[2386]: W0115 05:44:05.352510 2386 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
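Note: the Container Manager dump above lists the kubelet's hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A minimal sketch of how such quantity-or-percentage thresholds evaluate; the thresholds are taken from that dump, while the node capacity and availability figures below are invented purely to show the arithmetic:

# Evaluate the hard-eviction thresholds from the NodeConfig dump above.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def signals_firing(available: dict, capacity: dict) -> list:
    firing = []
    for signal, (kind, threshold) in THRESHOLDS.items():
        limit = threshold if kind == "quantity" else threshold * capacity[signal]
        if available[signal] < limit:
            firing.append(signal)
    return firing

# Hypothetical node: 8 GiB RAM (600 MiB free), 40 GiB disk (3 GiB free), 2.6M inodes.
capacity  = {"memory.available": 8 << 30, "nodefs.available": 40 << 30,
             "nodefs.inodesFree": 2_621_440, "imagefs.available": 40 << 30,
             "imagefs.inodesFree": 2_621_440}
available = {"memory.available": 600 << 20, "nodefs.available": 3 << 30,
             "nodefs.inodesFree": 500_000, "imagefs.available": 3 << 30,
             "imagefs.inodesFree": 500_000}
print(signals_firing(available, capacity))
# ['nodefs.available', 'imagefs.available']: 3 GiB free is below 10% and 15% of 40 GiB.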
Jan 15 05:44:05.355034 kubelet[2386]: I0115 05:44:05.354945 2386 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 05:44:05.355034 kubelet[2386]: I0115 05:44:05.354994 2386 server.go:1287] "Started kubelet" Jan 15 05:44:05.359298 kubelet[2386]: W0115 05:44:05.358706 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:05.359298 kubelet[2386]: E0115 05:44:05.358767 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:05.359298 kubelet[2386]: W0115 05:44:05.358908 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:05.359298 kubelet[2386]: E0115 05:44:05.358950 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:05.391340 kubelet[2386]: E0115 05:44:05.388877 2386 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.123:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.123:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ad138bf58bf66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 05:44:05.354979174 +0000 UTC m=+0.506299356,LastTimestamp:2026-01-15 05:44:05.354979174 +0000 UTC m=+0.506299356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 05:44:05.490625 kubelet[2386]: I0115 05:44:05.490491 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 05:44:05.492606 kubelet[2386]: I0115 05:44:05.492515 2386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 05:44:05.492666 kubelet[2386]: I0115 05:44:05.492608 2386 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 05:44:05.493023 kubelet[2386]: E0115 05:44:05.492979 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:44:05.493919 kubelet[2386]: I0115 05:44:05.493857 2386 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 05:44:05.494009 kubelet[2386]: I0115 05:44:05.493983 2386 reconciler.go:26] "Reconciler: start to sync state" Jan 15 05:44:05.494821 kubelet[2386]: W0115 05:44:05.494724 2386 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:05.494821 kubelet[2386]: E0115 05:44:05.494832 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:05.495210 kubelet[2386]: E0115 05:44:05.495034 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="200ms" Jan 15 05:44:05.495558 kubelet[2386]: I0115 05:44:05.495348 2386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 05:44:05.496056 kubelet[2386]: I0115 05:44:05.495949 2386 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 05:44:05.496174 kubelet[2386]: I0115 05:44:05.496152 2386 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 05:44:05.502873 kubelet[2386]: I0115 05:44:05.502809 2386 server.go:479] "Adding debug handlers to kubelet server" Jan 15 05:44:05.504014 kubelet[2386]: I0115 05:44:05.503927 2386 factory.go:221] Registration of the systemd container factory successfully Jan 15 05:44:05.506223 kubelet[2386]: I0115 05:44:05.506098 2386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 05:44:05.508228 kubelet[2386]: E0115 05:44:05.508114 2386 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 05:44:05.509000 audit[2400]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.509000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe6dc0b060 a2=0 a3=0 items=0 ppid=2386 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 05:44:05.510575 kubelet[2386]: I0115 05:44:05.510376 2386 factory.go:221] Registration of the containerd container factory successfully Jan 15 05:44:05.516000 audit[2401]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.516000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd59b83120 a2=0 a3=0 items=0 ppid=2386 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 05:44:05.523000 audit[2405]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.523000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd203c5bb0 a2=0 a3=0 items=0 ppid=2386 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:44:05.527000 audit[2407]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.527000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffc4fdb180 a2=0 a3=0 items=0 ppid=2386 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:44:05.538644 kubelet[2386]: I0115 05:44:05.538625 2386 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 05:44:05.538835 kubelet[2386]: I0115 05:44:05.538821 2386 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 05:44:05.539022 kubelet[2386]: I0115 05:44:05.539008 2386 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:44:05.539000 audit[2412]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.539000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 
a1=7fffcbb4e560 a2=0 a3=0 items=0 ppid=2386 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 15 05:44:05.540669 kubelet[2386]: I0115 05:44:05.540035 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 05:44:05.541000 audit[2414]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.541000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff049968a0 a2=0 a3=0 items=0 ppid=2386 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 05:44:05.541000 audit[2413]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:05.541000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc1b9e5370 a2=0 a3=0 items=0 ppid=2386 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.541000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 05:44:05.542592 kubelet[2386]: I0115 05:44:05.542373 2386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 05:44:05.542592 kubelet[2386]: I0115 05:44:05.542539 2386 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 05:44:05.542702 kubelet[2386]: I0115 05:44:05.542632 2386 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
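Note: the `proctitle=` field in the NETFILTER_CFG audit records above is the command line of the auditing process, hex-encoded with NUL bytes separating argv entries. A short decoder; the sample value is copied verbatim from the first iptables record above and decodes to the command that created the KUBE-IPTABLES-HINT chain:

# Decode an audit "proctitle=" hex value into argv.
def decode_proctitle(hex_value: str) -> list:
    return bytes.fromhex(hex_value).decode().split("\x00")

# Copied verbatim from the first NETFILTER_CFG record above:
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
print(decode_proctitle(sample))
# ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-IPTABLES-HINT', '-t', 'mangle']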
Jan 15 05:44:05.542702 kubelet[2386]: I0115 05:44:05.542640 2386 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 05:44:05.542754 kubelet[2386]: E0115 05:44:05.542709 2386 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 05:44:05.543563 kubelet[2386]: W0115 05:44:05.543305 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:05.543563 kubelet[2386]: E0115 05:44:05.543360 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:05.543000 audit[2416]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.543000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfc1b7810 a2=0 a3=0 items=0 ppid=2386 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 05:44:05.545000 audit[2417]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:05.545000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe79af2320 a2=0 a3=0 items=0 ppid=2386 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 05:44:05.545000 audit[2418]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:05.545000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1c1fb9c0 a2=0 a3=0 items=0 ppid=2386 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 05:44:05.548000 audit[2419]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:05.548000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc666fc5a0 a2=0 a3=0 items=0 ppid=2386 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 05:44:05.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 05:44:05.549000 audit[2420]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:05.549000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec2c151d0 a2=0 a3=0 items=0 ppid=2386 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:05.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 05:44:05.594299 kubelet[2386]: E0115 05:44:05.594184 2386 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:44:05.606535 kubelet[2386]: I0115 05:44:05.606364 2386 policy_none.go:49] "None policy: Start" Jan 15 05:44:05.606753 kubelet[2386]: I0115 05:44:05.606691 2386 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 05:44:05.606787 kubelet[2386]: I0115 05:44:05.606774 2386 state_mem.go:35] "Initializing new in-memory state store" Jan 15 05:44:05.631912 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 15 05:44:05.643472 kubelet[2386]: E0115 05:44:05.643370 2386 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 15 05:44:05.643707 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 05:44:05.647876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 05:44:05.663796 kubelet[2386]: I0115 05:44:05.663725 2386 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 05:44:05.664015 kubelet[2386]: I0115 05:44:05.663947 2386 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 05:44:05.664015 kubelet[2386]: I0115 05:44:05.663965 2386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 05:44:05.664166 kubelet[2386]: I0115 05:44:05.664153 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 05:44:05.664923 kubelet[2386]: E0115 05:44:05.664852 2386 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 05:44:05.665055 kubelet[2386]: E0115 05:44:05.664931 2386 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 05:44:05.696520 kubelet[2386]: E0115 05:44:05.696317 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="400ms" Jan 15 05:44:05.766394 kubelet[2386]: I0115 05:44:05.766202 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:44:05.766573 kubelet[2386]: E0115 05:44:05.766554 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 15 05:44:05.855075 systemd[1]: Created slice kubepods-burstable-pod27195ebb2ba3f55722dd6baf2c99a444.slice - libcontainer container kubepods-burstable-pod27195ebb2ba3f55722dd6baf2c99a444.slice. Jan 15 05:44:05.873494 kubelet[2386]: E0115 05:44:05.873378 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:05.877174 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 15 05:44:05.894173 kubelet[2386]: E0115 05:44:05.894120 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:05.895542 kubelet[2386]: I0115 05:44:05.895517 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27195ebb2ba3f55722dd6baf2c99a444-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27195ebb2ba3f55722dd6baf2c99a444\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:05.895542 kubelet[2386]: I0115 05:44:05.895541 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:05.895659 kubelet[2386]: I0115 05:44:05.895559 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:05.895659 kubelet[2386]: I0115 05:44:05.895573 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:05.898107 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container 
kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 15 05:44:05.901043 kubelet[2386]: E0115 05:44:05.900983 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:05.904394 kubelet[2386]: I0115 05:44:05.904193 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:05.904394 kubelet[2386]: I0115 05:44:05.904241 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27195ebb2ba3f55722dd6baf2c99a444-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27195ebb2ba3f55722dd6baf2c99a444\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:05.904394 kubelet[2386]: I0115 05:44:05.904264 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27195ebb2ba3f55722dd6baf2c99a444-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27195ebb2ba3f55722dd6baf2c99a444\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:05.904394 kubelet[2386]: I0115 05:44:05.904327 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:05.904394 kubelet[2386]: I0115 05:44:05.904342 2386 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 05:44:05.970050 kubelet[2386]: I0115 05:44:05.969952 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:44:05.970553 kubelet[2386]: E0115 05:44:05.970519 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 15 05:44:06.102624 kubelet[2386]: E0115 05:44:06.102260 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="800ms" Jan 15 05:44:06.175667 kubelet[2386]: E0115 05:44:06.175313 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:06.177620 containerd[1603]: time="2026-01-15T05:44:06.177526995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27195ebb2ba3f55722dd6baf2c99a444,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:06.196113 kubelet[2386]: E0115 05:44:06.195946 2386 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:06.197021 containerd[1603]: time="2026-01-15T05:44:06.196934681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:06.201998 kubelet[2386]: E0115 05:44:06.201979 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:06.202747 containerd[1603]: time="2026-01-15T05:44:06.202699941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:06.262956 containerd[1603]: time="2026-01-15T05:44:06.262848365Z" level=info msg="connecting to shim 404d74cf5643a276c59a830afe3f08e8ffb4d0b3117eca4806cfd2000c6559c9" address="unix:///run/containerd/s/4f6b0deb28c3f09401560befd0e8b31e29522fc22b42c93b679756876a6bb205" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:06.270750 containerd[1603]: time="2026-01-15T05:44:06.270546116Z" level=info msg="connecting to shim 68349c51f9390e3ca525b24ff556742a271f7bfd1d37171cf291855b3add27f5" address="unix:///run/containerd/s/43cc40af6cc8991d88c2599f02742eafa358805fbf51f4a31c3f482391f86a3c" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:06.294564 containerd[1603]: time="2026-01-15T05:44:06.294525503Z" level=info msg="connecting to shim 386ccb51052a7229baf12f11512767a190bda5337eacac0fbe1fbd0612bab7bf" address="unix:///run/containerd/s/264de0f1f1a999ed11f3c0e4b31b7b344058441ab21809169a6f086fce45e152" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:06.306869 kubelet[2386]: W0115 05:44:06.306677 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:06.307028 kubelet[2386]: E0115 05:44:06.306855 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:06.330664 systemd[1]: Started cri-containerd-68349c51f9390e3ca525b24ff556742a271f7bfd1d37171cf291855b3add27f5.scope - libcontainer container 68349c51f9390e3ca525b24ff556742a271f7bfd1d37171cf291855b3add27f5. 
Jan 15 05:44:06.508405 kubelet[2386]: I0115 05:44:06.508302 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:44:06.509343 kubelet[2386]: E0115 05:44:06.509265 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 15 05:44:06.530000 audit: BPF prog-id=83 op=LOAD Jan 15 05:44:06.531000 audit: BPF prog-id=84 op=LOAD Jan 15 05:44:06.531000 audit[2455]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.532000 audit: BPF prog-id=84 op=UNLOAD Jan 15 05:44:06.532000 audit[2455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.532000 audit: BPF prog-id=85 op=LOAD Jan 15 05:44:06.532000 audit[2455]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.532000 audit: BPF prog-id=86 op=LOAD Jan 15 05:44:06.532000 audit[2455]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.533000 audit: BPF prog-id=86 op=UNLOAD Jan 15 05:44:06.533000 audit[2455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:44:06.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.533000 audit: BPF prog-id=85 op=UNLOAD Jan 15 05:44:06.533000 audit[2455]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.533000 audit: BPF prog-id=87 op=LOAD Jan 15 05:44:06.533000 audit[2455]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2441 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638333439633531663933393065336361353235623234666635353637 Jan 15 05:44:06.561681 systemd[1]: Started cri-containerd-404d74cf5643a276c59a830afe3f08e8ffb4d0b3117eca4806cfd2000c6559c9.scope - libcontainer container 404d74cf5643a276c59a830afe3f08e8ffb4d0b3117eca4806cfd2000c6559c9. Jan 15 05:44:06.579178 systemd[1]: Started cri-containerd-386ccb51052a7229baf12f11512767a190bda5337eacac0fbe1fbd0612bab7bf.scope - libcontainer container 386ccb51052a7229baf12f11512767a190bda5337eacac0fbe1fbd0612bab7bf. 
Jan 15 05:44:06.608000 audit: BPF prog-id=88 op=LOAD Jan 15 05:44:06.609000 audit: BPF prog-id=89 op=LOAD Jan 15 05:44:06.609000 audit[2485]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.609000 audit: BPF prog-id=89 op=UNLOAD Jan 15 05:44:06.609000 audit[2485]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.610000 audit: BPF prog-id=90 op=LOAD Jan 15 05:44:06.610000 audit[2485]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.610000 audit: BPF prog-id=91 op=LOAD Jan 15 05:44:06.610000 audit[2485]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.611000 audit: BPF prog-id=91 op=UNLOAD Jan 15 05:44:06.611000 audit[2485]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.612000 audit: BPF prog-id=90 op=UNLOAD Jan 15 05:44:06.612000 audit[2485]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.613000 audit: BPF prog-id=92 op=LOAD Jan 15 05:44:06.613000 audit[2485]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2437 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430346437346366353634336132373663353961383330616665336630 Jan 15 05:44:06.625193 kubelet[2386]: W0115 05:44:06.625149 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:06.625527 kubelet[2386]: E0115 05:44:06.625205 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:06.630000 audit: BPF prog-id=93 op=LOAD Jan 15 05:44:06.631000 audit: BPF prog-id=94 op=LOAD Jan 15 05:44:06.631000 audit[2499]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2464 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.631000 audit: BPF prog-id=94 op=UNLOAD Jan 15 05:44:06.631000 audit[2499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.631000 audit: BPF prog-id=95 op=LOAD Jan 15 05:44:06.631000 audit[2499]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2464 
pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.631000 audit: BPF prog-id=96 op=LOAD Jan 15 05:44:06.631000 audit[2499]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2464 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.631000 audit: BPF prog-id=96 op=UNLOAD Jan 15 05:44:06.631000 audit[2499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.631000 audit: BPF prog-id=95 op=UNLOAD Jan 15 05:44:06.631000 audit[2499]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.632000 audit: BPF prog-id=97 op=LOAD Jan 15 05:44:06.632000 audit[2499]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2464 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338366363623531303532613732323962616631326631313531323736 Jan 15 05:44:06.658899 kubelet[2386]: W0115 05:44:06.658651 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:06.659230 
kubelet[2386]: E0115 05:44:06.659153 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:06.660318 containerd[1603]: time="2026-01-15T05:44:06.660161710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"68349c51f9390e3ca525b24ff556742a271f7bfd1d37171cf291855b3add27f5\"" Jan 15 05:44:06.663318 kubelet[2386]: E0115 05:44:06.663263 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:06.667480 containerd[1603]: time="2026-01-15T05:44:06.667380627Z" level=info msg="CreateContainer within sandbox \"68349c51f9390e3ca525b24ff556742a271f7bfd1d37171cf291855b3add27f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 05:44:06.678747 containerd[1603]: time="2026-01-15T05:44:06.678287339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:27195ebb2ba3f55722dd6baf2c99a444,Namespace:kube-system,Attempt:0,} returns sandbox id \"404d74cf5643a276c59a830afe3f08e8ffb4d0b3117eca4806cfd2000c6559c9\"" Jan 15 05:44:06.679321 kubelet[2386]: E0115 05:44:06.679288 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:06.683371 containerd[1603]: time="2026-01-15T05:44:06.682935256Z" level=info msg="Container c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:06.683943 containerd[1603]: time="2026-01-15T05:44:06.683795751Z" level=info msg="CreateContainer within sandbox \"404d74cf5643a276c59a830afe3f08e8ffb4d0b3117eca4806cfd2000c6559c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 05:44:06.696384 containerd[1603]: time="2026-01-15T05:44:06.696200041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"386ccb51052a7229baf12f11512767a190bda5337eacac0fbe1fbd0612bab7bf\"" Jan 15 05:44:06.696956 containerd[1603]: time="2026-01-15T05:44:06.696840425Z" level=info msg="CreateContainer within sandbox \"68349c51f9390e3ca525b24ff556742a271f7bfd1d37171cf291855b3add27f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386\"" Jan 15 05:44:06.697578 kubelet[2386]: E0115 05:44:06.697228 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:06.699318 containerd[1603]: time="2026-01-15T05:44:06.699235451Z" level=info msg="StartContainer for \"c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386\"" Jan 15 05:44:06.700573 containerd[1603]: time="2026-01-15T05:44:06.700358750Z" level=info msg="CreateContainer within sandbox \"386ccb51052a7229baf12f11512767a190bda5337eacac0fbe1fbd0612bab7bf\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 05:44:06.701154 containerd[1603]: time="2026-01-15T05:44:06.701115202Z" level=info msg="connecting to shim c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386" address="unix:///run/containerd/s/43cc40af6cc8991d88c2599f02742eafa358805fbf51f4a31c3f482391f86a3c" protocol=ttrpc version=3 Jan 15 05:44:06.706876 containerd[1603]: time="2026-01-15T05:44:06.706805331Z" level=info msg="Container 4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:06.717398 containerd[1603]: time="2026-01-15T05:44:06.717256011Z" level=info msg="CreateContainer within sandbox \"404d74cf5643a276c59a830afe3f08e8ffb4d0b3117eca4806cfd2000c6559c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f\"" Jan 15 05:44:06.719093 containerd[1603]: time="2026-01-15T05:44:06.719067276Z" level=info msg="StartContainer for \"4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f\"" Jan 15 05:44:06.720811 containerd[1603]: time="2026-01-15T05:44:06.720713751Z" level=info msg="Container 424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:06.721100 containerd[1603]: time="2026-01-15T05:44:06.721071929Z" level=info msg="connecting to shim 4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f" address="unix:///run/containerd/s/4f6b0deb28c3f09401560befd0e8b31e29522fc22b42c93b679756876a6bb205" protocol=ttrpc version=3 Jan 15 05:44:06.733920 containerd[1603]: time="2026-01-15T05:44:06.733882485Z" level=info msg="CreateContainer within sandbox \"386ccb51052a7229baf12f11512767a190bda5337eacac0fbe1fbd0612bab7bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f\"" Jan 15 05:44:06.735005 kubelet[2386]: W0115 05:44:06.734783 2386 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.123:6443: connect: connection refused Jan 15 05:44:06.735005 kubelet[2386]: E0115 05:44:06.734869 2386 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="UnhandledError" Jan 15 05:44:06.735963 containerd[1603]: time="2026-01-15T05:44:06.735867412Z" level=info msg="StartContainer for \"424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f\"" Jan 15 05:44:06.737603 containerd[1603]: time="2026-01-15T05:44:06.737514958Z" level=info msg="connecting to shim 424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f" address="unix:///run/containerd/s/264de0f1f1a999ed11f3c0e4b31b7b344058441ab21809169a6f086fce45e152" protocol=ttrpc version=3 Jan 15 05:44:06.752727 systemd[1]: Started cri-containerd-4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f.scope - libcontainer container 4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f. 
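
The audit PROCTITLE fields in the records above are hex-encoded command lines with NUL-separated argv elements; decoded, they read "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…" (the tail is truncated in the log and stays truncated). Below is a minimal Go sketch of the decoding, not part of the log itself; the sample value is a verbatim prefix of one of the proctitle fields above.

// Minimal sketch: decode an audit PROCTITLE value (hex-encoded, NUL-separated argv)
// into a readable command line. The sample is an abbreviated copy of a proctitle
// field logged above; the truncated remainder is simply omitted here.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the PROCTITLE record.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Decodes to: "runc --root /run/containerd/runc/k8s.io"
	sample := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	cmdline, err := decodeProctitle(sample)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(cmdline)
}
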
Jan 15 05:44:06.767678 systemd[1]: Started cri-containerd-424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f.scope - libcontainer container 424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f. Jan 15 05:44:06.788755 systemd[1]: Started cri-containerd-c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386.scope - libcontainer container c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386. Jan 15 05:44:06.931000 audit: BPF prog-id=98 op=LOAD Jan 15 05:44:06.937032 kubelet[2386]: E0115 05:44:06.935590 2386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="1.6s" Jan 15 05:44:06.938000 audit: BPF prog-id=99 op=LOAD Jan 15 05:44:06.938000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 05:44:06.938000 audit: BPF prog-id=99 op=UNLOAD Jan 15 05:44:06.938000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 05:44:06.939000 audit: BPF prog-id=100 op=LOAD Jan 15 05:44:06.939000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 05:44:06.939000 audit: BPF prog-id=101 op=LOAD Jan 15 05:44:06.939000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 
05:44:06.939000 audit: BPF prog-id=101 op=UNLOAD Jan 15 05:44:06.939000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 05:44:06.939000 audit: BPF prog-id=100 op=UNLOAD Jan 15 05:44:06.939000 audit[2561]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 05:44:06.939000 audit: BPF prog-id=102 op=LOAD Jan 15 05:44:06.939000 audit[2561]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2437 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433303033303165346363303236386465623561643332663831323466 Jan 15 05:44:06.955000 audit: BPF prog-id=103 op=LOAD Jan 15 05:44:06.956000 audit: BPF prog-id=104 op=LOAD Jan 15 05:44:06.956000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.956000 audit: BPF prog-id=104 op=UNLOAD Jan 15 05:44:06.956000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.956000 audit: BPF prog-id=105 op=LOAD Jan 15 05:44:06.956000 audit[2578]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.956000 audit: BPF prog-id=106 op=LOAD Jan 15 05:44:06.956000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.956000 audit: BPF prog-id=106 op=UNLOAD Jan 15 05:44:06.956000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.957000 audit: BPF prog-id=105 op=UNLOAD Jan 15 05:44:06.957000 audit[2578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.957000 audit: BPF prog-id=107 op=LOAD Jan 15 05:44:06.957000 audit[2578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2464 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432346464356633316465303361363137356264616337653839633661 Jan 15 05:44:06.988000 audit: BPF prog-id=108 op=LOAD Jan 15 05:44:06.990000 audit: BPF prog-id=109 op=LOAD Jan 15 05:44:06.990000 audit[2558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2441 pid=2558 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:06.990000 audit: BPF prog-id=109 op=UNLOAD Jan 15 05:44:06.990000 audit[2558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2441 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:06.990000 audit: BPF prog-id=110 op=LOAD Jan 15 05:44:06.990000 audit[2558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2441 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.990000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:06.991000 audit: BPF prog-id=111 op=LOAD Jan 15 05:44:06.991000 audit[2558]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2441 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:06.991000 audit: BPF prog-id=111 op=UNLOAD Jan 15 05:44:06.991000 audit[2558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2441 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:06.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:06.991000 audit: BPF prog-id=110 op=UNLOAD Jan 15 05:44:06.991000 audit[2558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2441 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 05:44:06.991000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:07.000000 audit: BPF prog-id=112 op=LOAD Jan 15 05:44:07.000000 audit[2558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2441 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:07.000000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331623365666535623339366633346232346137303937636439333534 Jan 15 05:44:07.265963 containerd[1603]: time="2026-01-15T05:44:07.265846564Z" level=info msg="StartContainer for \"4300301e4cc0268deb5ad32f8124f4c2af74ecc6a2da543f31e8e6c26758629f\" returns successfully" Jan 15 05:44:07.268291 containerd[1603]: time="2026-01-15T05:44:07.268201050Z" level=info msg="StartContainer for \"c1b3efe5b396f34b24a7097cd93549f952c6ffcf3141751d52be2213ecb73386\" returns successfully" Jan 15 05:44:07.268601 containerd[1603]: time="2026-01-15T05:44:07.268518993Z" level=info msg="StartContainer for \"424dd5f31de03a6175bdac7e89c6a358d2cdcc4d0cbc836c94d708ab09b79b4f\" returns successfully" Jan 15 05:44:07.312113 kubelet[2386]: I0115 05:44:07.312082 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:44:07.319554 kubelet[2386]: E0115 05:44:07.319497 2386 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Jan 15 05:44:07.563677 kubelet[2386]: E0115 05:44:07.563369 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:07.563812 kubelet[2386]: E0115 05:44:07.563690 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:07.573582 kubelet[2386]: E0115 05:44:07.573412 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:07.573778 kubelet[2386]: E0115 05:44:07.573733 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:07.579551 kubelet[2386]: E0115 05:44:07.579359 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:07.580000 kubelet[2386]: E0115 05:44:07.579955 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:08.590391 kubelet[2386]: E0115 05:44:08.590304 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Jan 15 05:44:08.590997 kubelet[2386]: E0115 05:44:08.590643 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:08.593501 kubelet[2386]: E0115 05:44:08.593022 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:08.593501 kubelet[2386]: E0115 05:44:08.593148 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:08.594749 kubelet[2386]: E0115 05:44:08.594687 2386 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:44:08.595035 kubelet[2386]: E0115 05:44:08.594965 2386 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:08.922726 kubelet[2386]: I0115 05:44:08.922401 2386 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:44:09.356715 kubelet[2386]: E0115 05:44:09.356413 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 15 05:44:09.448834 kubelet[2386]: I0115 05:44:09.448744 2386 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 05:44:09.448943 kubelet[2386]: E0115 05:44:09.448856 2386 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 15 05:44:09.495230 kubelet[2386]: I0115 05:44:09.494952 2386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:09.501235 kubelet[2386]: E0115 05:44:09.501202 2386 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:09.501235 kubelet[2386]: I0115 05:44:09.501235 2386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:44:09.503225 kubelet[2386]: E0115 05:44:09.503128 2386 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 05:44:09.503398 kubelet[2386]: I0115 05:44:09.503318 2386 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:09.505644 kubelet[2386]: E0115 05:44:09.505567 2386 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:09.508030 kubelet[2386]: I0115 05:44:09.507892 2386 apiserver.go:52] "Watching apiserver" Jan 15 05:44:09.595008 kubelet[2386]: I0115 05:44:09.594888 2386 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 05:44:11.787306 systemd[1]: Reload requested from client PID 2664 ('systemctl') (unit 
session-8.scope)... Jan 15 05:44:11.787346 systemd[1]: Reloading... Jan 15 05:44:11.875541 zram_generator::config[2710]: No configuration found. Jan 15 05:44:12.104774 systemd[1]: Reloading finished in 316 ms. Jan 15 05:44:12.140302 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:44:12.162205 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 05:44:12.162742 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:44:12.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:12.162867 systemd[1]: kubelet.service: Consumed 1.807s CPU time, 133M memory peak. Jan 15 05:44:12.166551 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 15 05:44:12.166674 kernel: audit: type=1131 audit(1768455852.162:386): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:12.166232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:44:12.167000 audit: BPF prog-id=113 op=LOAD Jan 15 05:44:12.167000 audit: BPF prog-id=63 op=UNLOAD Jan 15 05:44:12.167000 audit: BPF prog-id=114 op=LOAD Jan 15 05:44:12.167000 audit: BPF prog-id=115 op=LOAD Jan 15 05:44:12.167000 audit: BPF prog-id=64 op=UNLOAD Jan 15 05:44:12.167000 audit: BPF prog-id=65 op=UNLOAD Jan 15 05:44:12.168000 audit: BPF prog-id=116 op=LOAD Jan 15 05:44:12.178519 kernel: audit: type=1334 audit(1768455852.167:387): prog-id=113 op=LOAD Jan 15 05:44:12.178554 kernel: audit: type=1334 audit(1768455852.167:388): prog-id=63 op=UNLOAD Jan 15 05:44:12.178575 kernel: audit: type=1334 audit(1768455852.167:389): prog-id=114 op=LOAD Jan 15 05:44:12.178597 kernel: audit: type=1334 audit(1768455852.167:390): prog-id=115 op=LOAD Jan 15 05:44:12.178612 kernel: audit: type=1334 audit(1768455852.167:391): prog-id=64 op=UNLOAD Jan 15 05:44:12.178669 kernel: audit: type=1334 audit(1768455852.167:392): prog-id=65 op=UNLOAD Jan 15 05:44:12.178690 kernel: audit: type=1334 audit(1768455852.168:393): prog-id=116 op=LOAD Jan 15 05:44:12.178707 kernel: audit: type=1334 audit(1768455852.168:394): prog-id=79 op=UNLOAD Jan 15 05:44:12.178725 kernel: audit: type=1334 audit(1768455852.169:395): prog-id=117 op=LOAD Jan 15 05:44:12.168000 audit: BPF prog-id=79 op=UNLOAD Jan 15 05:44:12.169000 audit: BPF prog-id=117 op=LOAD Jan 15 05:44:12.169000 audit: BPF prog-id=80 op=UNLOAD Jan 15 05:44:12.169000 audit: BPF prog-id=118 op=LOAD Jan 15 05:44:12.169000 audit: BPF prog-id=119 op=LOAD Jan 15 05:44:12.169000 audit: BPF prog-id=81 op=UNLOAD Jan 15 05:44:12.169000 audit: BPF prog-id=82 op=UNLOAD Jan 15 05:44:12.170000 audit: BPF prog-id=120 op=LOAD Jan 15 05:44:12.170000 audit: BPF prog-id=66 op=UNLOAD Jan 15 05:44:12.172000 audit: BPF prog-id=121 op=LOAD Jan 15 05:44:12.172000 audit: BPF prog-id=69 op=UNLOAD Jan 15 05:44:12.172000 audit: BPF prog-id=122 op=LOAD Jan 15 05:44:12.172000 audit: BPF prog-id=123 op=LOAD Jan 15 05:44:12.172000 audit: BPF prog-id=70 op=UNLOAD Jan 15 05:44:12.172000 audit: BPF prog-id=71 op=UNLOAD Jan 15 05:44:12.173000 audit: BPF prog-id=124 op=LOAD Jan 15 05:44:12.173000 audit: BPF prog-id=125 op=LOAD Jan 15 05:44:12.173000 audit: BPF prog-id=67 op=UNLOAD Jan 15 05:44:12.173000 audit: BPF prog-id=68 op=UNLOAD Jan 15 05:44:12.174000 audit: 
BPF prog-id=126 op=LOAD Jan 15 05:44:12.174000 audit: BPF prog-id=72 op=UNLOAD Jan 15 05:44:12.174000 audit: BPF prog-id=127 op=LOAD Jan 15 05:44:12.174000 audit: BPF prog-id=128 op=LOAD Jan 15 05:44:12.174000 audit: BPF prog-id=73 op=UNLOAD Jan 15 05:44:12.174000 audit: BPF prog-id=74 op=UNLOAD Jan 15 05:44:12.176000 audit: BPF prog-id=129 op=LOAD Jan 15 05:44:12.176000 audit: BPF prog-id=78 op=UNLOAD Jan 15 05:44:12.177000 audit: BPF prog-id=130 op=LOAD Jan 15 05:44:12.177000 audit: BPF prog-id=75 op=UNLOAD Jan 15 05:44:12.177000 audit: BPF prog-id=131 op=LOAD Jan 15 05:44:12.177000 audit: BPF prog-id=132 op=LOAD Jan 15 05:44:12.177000 audit: BPF prog-id=76 op=UNLOAD Jan 15 05:44:12.180000 audit: BPF prog-id=77 op=UNLOAD Jan 15 05:44:12.391553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:44:12.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:12.396940 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 05:44:12.458513 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:44:12.458513 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 05:44:12.458513 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:44:12.458513 kubelet[2755]: I0115 05:44:12.458119 2755 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 05:44:12.467054 kubelet[2755]: I0115 05:44:12.466909 2755 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 05:44:12.467054 kubelet[2755]: I0115 05:44:12.466960 2755 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 05:44:12.467193 kubelet[2755]: I0115 05:44:12.467178 2755 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 05:44:12.468341 kubelet[2755]: I0115 05:44:12.468285 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 15 05:44:12.470747 kubelet[2755]: I0115 05:44:12.470604 2755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 05:44:12.476534 kubelet[2755]: I0115 05:44:12.476489 2755 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 05:44:12.483668 kubelet[2755]: I0115 05:44:12.483609 2755 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 05:44:12.483942 kubelet[2755]: I0115 05:44:12.483876 2755 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 05:44:12.484127 kubelet[2755]: I0115 05:44:12.483926 2755 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 05:44:12.484127 kubelet[2755]: I0115 05:44:12.484126 2755 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 05:44:12.484246 kubelet[2755]: I0115 05:44:12.484135 2755 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 05:44:12.484246 kubelet[2755]: I0115 05:44:12.484183 2755 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:44:12.484480 kubelet[2755]: I0115 05:44:12.484395 2755 kubelet.go:446] "Attempting to sync node with API server" Jan 15 05:44:12.484597 kubelet[2755]: I0115 05:44:12.484547 2755 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 05:44:12.484597 kubelet[2755]: I0115 05:44:12.484593 2755 kubelet.go:352] "Adding apiserver pod source" Jan 15 05:44:12.484639 kubelet[2755]: I0115 05:44:12.484605 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 05:44:12.486014 kubelet[2755]: I0115 05:44:12.485959 2755 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.488092 2755 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.488728 2755 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.488791 2755 server.go:1287] "Started kubelet" Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.488835 2755 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.488875 2755 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.489731 2755 server.go:479] "Adding debug handlers to kubelet server" Jan 15 05:44:12.490564 kubelet[2755]: I0115 05:44:12.489965 2755 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 05:44:12.491714 kubelet[2755]: I0115 05:44:12.491657 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 05:44:12.493616 kubelet[2755]: E0115 05:44:12.493505 2755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:44:12.493616 kubelet[2755]: I0115 05:44:12.493557 2755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 05:44:12.493616 kubelet[2755]: I0115 05:44:12.493561 2755 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 05:44:12.493800 kubelet[2755]: I0115 05:44:12.493727 2755 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 05:44:12.494363 kubelet[2755]: I0115 05:44:12.494292 2755 reconciler.go:26] "Reconciler: start to sync state" Jan 15 05:44:12.495613 kubelet[2755]: I0115 05:44:12.495597 2755 factory.go:221] Registration of the systemd container factory successfully Jan 15 05:44:12.495748 kubelet[2755]: I0115 05:44:12.495732 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 05:44:12.498915 kubelet[2755]: E0115 05:44:12.498758 2755 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 05:44:12.500080 kubelet[2755]: I0115 05:44:12.500010 2755 factory.go:221] Registration of the containerd container factory successfully Jan 15 05:44:12.517963 kubelet[2755]: I0115 05:44:12.517812 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 05:44:12.521048 kubelet[2755]: I0115 05:44:12.521003 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 05:44:12.521162 kubelet[2755]: I0115 05:44:12.521052 2755 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 05:44:12.521162 kubelet[2755]: I0115 05:44:12.521072 2755 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
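
The recurring dns.go "Nameserver limits exceeded" errors indicate that the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so the extras are dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. The following Go sketch is an illustrative re-implementation of that check, assuming the standard glibc MAXNS limit of 3; it is not kubelet's actual dns.go code.

// Illustrative sketch of the check behind "Nameserver limits exceeded": only the
// first three nameserver entries in resolv.conf are applied, the rest are omitted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, only %v will be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Printf("nameservers: %v\n", servers)
	}
}
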
Jan 15 05:44:12.521162 kubelet[2755]: I0115 05:44:12.521079 2755 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 05:44:12.521162 kubelet[2755]: E0115 05:44:12.521123 2755 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 05:44:12.565755 kubelet[2755]: I0115 05:44:12.565626 2755 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 05:44:12.565755 kubelet[2755]: I0115 05:44:12.565672 2755 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 05:44:12.565755 kubelet[2755]: I0115 05:44:12.565691 2755 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:44:12.565985 kubelet[2755]: I0115 05:44:12.565833 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 05:44:12.565985 kubelet[2755]: I0115 05:44:12.565843 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 05:44:12.565985 kubelet[2755]: I0115 05:44:12.565859 2755 policy_none.go:49] "None policy: Start" Jan 15 05:44:12.565985 kubelet[2755]: I0115 05:44:12.565868 2755 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 05:44:12.565985 kubelet[2755]: I0115 05:44:12.565878 2755 state_mem.go:35] "Initializing new in-memory state store" Jan 15 05:44:12.566082 kubelet[2755]: I0115 05:44:12.566004 2755 state_mem.go:75] "Updated machine memory state" Jan 15 05:44:12.572193 kubelet[2755]: I0115 05:44:12.572151 2755 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 05:44:12.572387 kubelet[2755]: I0115 05:44:12.572351 2755 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 05:44:12.572497 kubelet[2755]: I0115 05:44:12.572384 2755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 05:44:12.573067 kubelet[2755]: I0115 05:44:12.572918 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 05:44:12.575240 kubelet[2755]: E0115 05:44:12.575180 2755 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 05:44:12.622579 kubelet[2755]: I0115 05:44:12.622551 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:44:12.623950 kubelet[2755]: I0115 05:44:12.622918 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:12.624175 kubelet[2755]: I0115 05:44:12.623018 2755 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:12.683231 kubelet[2755]: I0115 05:44:12.682911 2755 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:44:12.694790 kubelet[2755]: I0115 05:44:12.694409 2755 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 15 05:44:12.694790 kubelet[2755]: I0115 05:44:12.694642 2755 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 05:44:12.695516 kubelet[2755]: I0115 05:44:12.695290 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 15 05:44:12.695967 kubelet[2755]: I0115 05:44:12.695816 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27195ebb2ba3f55722dd6baf2c99a444-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"27195ebb2ba3f55722dd6baf2c99a444\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:12.696574 kubelet[2755]: I0115 05:44:12.696246 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:12.696574 kubelet[2755]: I0115 05:44:12.696271 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:12.696574 kubelet[2755]: I0115 05:44:12.696289 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:12.696574 kubelet[2755]: I0115 05:44:12.696301 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:12.696574 kubelet[2755]: I0115 05:44:12.696315 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/27195ebb2ba3f55722dd6baf2c99a444-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"27195ebb2ba3f55722dd6baf2c99a444\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:12.696683 kubelet[2755]: I0115 05:44:12.696327 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27195ebb2ba3f55722dd6baf2c99a444-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"27195ebb2ba3f55722dd6baf2c99a444\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:44:12.696683 kubelet[2755]: I0115 05:44:12.696340 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:44:12.930672 kubelet[2755]: E0115 05:44:12.930541 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:12.935391 kubelet[2755]: E0115 05:44:12.935255 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:12.939021 kubelet[2755]: E0115 05:44:12.938971 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:13.485841 kubelet[2755]: I0115 05:44:13.485771 2755 apiserver.go:52] "Watching apiserver" Jan 15 05:44:13.494277 kubelet[2755]: I0115 05:44:13.494248 2755 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 05:44:13.548036 kubelet[2755]: E0115 05:44:13.548002 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:13.548384 kubelet[2755]: E0115 05:44:13.548086 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:13.550985 kubelet[2755]: E0115 05:44:13.550864 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:13.569038 kubelet[2755]: I0115 05:44:13.568974 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.568963648 podStartE2EDuration="1.568963648s" podCreationTimestamp="2026-01-15 05:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:44:13.5683648 +0000 UTC m=+1.166407329" watchObservedRunningTime="2026-01-15 05:44:13.568963648 +0000 UTC m=+1.167006178" Jan 15 05:44:13.590757 kubelet[2755]: I0115 05:44:13.590593 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5905770559999999 podStartE2EDuration="1.590577056s" podCreationTimestamp="2026-01-15 
05:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:44:13.590251428 +0000 UTC m=+1.188293988" watchObservedRunningTime="2026-01-15 05:44:13.590577056 +0000 UTC m=+1.188619585" Jan 15 05:44:13.609439 kubelet[2755]: I0115 05:44:13.609329 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.609310826 podStartE2EDuration="1.609310826s" podCreationTimestamp="2026-01-15 05:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:44:13.599187264 +0000 UTC m=+1.197229793" watchObservedRunningTime="2026-01-15 05:44:13.609310826 +0000 UTC m=+1.207353355" Jan 15 05:44:14.551050 kubelet[2755]: E0115 05:44:14.550982 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:14.551525 kubelet[2755]: E0115 05:44:14.551167 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:15.554642 kubelet[2755]: E0115 05:44:15.554564 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:15.742718 kubelet[2755]: E0115 05:44:15.742330 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:15.799938 kubelet[2755]: E0115 05:44:15.799734 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:17.171869 kubelet[2755]: I0115 05:44:17.171751 2755 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 05:44:17.172746 kubelet[2755]: I0115 05:44:17.172394 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 05:44:17.172809 containerd[1603]: time="2026-01-15T05:44:17.172121472Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 05:44:17.821026 systemd[1]: Created slice kubepods-besteffort-podcccf1272_d5b9_4ee2_9d56_e3a53f254f57.slice - libcontainer container kubepods-besteffort-podcccf1272_d5b9_4ee2_9d56_e3a53f254f57.slice. 
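
The pod_startup_latency_tracker entries above report podStartSLOduration as the time the pod was observed running minus its creation timestamp; the pull timestamps are zero here because the static-pod images were already present. A short Go sketch reproducing the arithmetic for the kube-scheduler entry, using the timestamps copied from the log, follows (illustrative only).

// Reproduce the podStartSLOduration arithmetic for kube-scheduler-localhost:
// observedRunningTime - podCreationTimestamp, with timestamps taken from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-15 05:44:12 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observedRunning, err := time.Parse(layout, "2026-01-15 05:44:13.568963648 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 1.568963648s, matching the logged podStartSLOduration.
	fmt.Println(observedRunning.Sub(created))
}
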
Jan 15 05:44:17.942857 kubelet[2755]: I0115 05:44:17.942765 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cccf1272-d5b9-4ee2-9d56-e3a53f254f57-kube-proxy\") pod \"kube-proxy-dtq27\" (UID: \"cccf1272-d5b9-4ee2-9d56-e3a53f254f57\") " pod="kube-system/kube-proxy-dtq27" Jan 15 05:44:17.942857 kubelet[2755]: I0115 05:44:17.942817 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cccf1272-d5b9-4ee2-9d56-e3a53f254f57-xtables-lock\") pod \"kube-proxy-dtq27\" (UID: \"cccf1272-d5b9-4ee2-9d56-e3a53f254f57\") " pod="kube-system/kube-proxy-dtq27" Jan 15 05:44:17.942857 kubelet[2755]: I0115 05:44:17.942834 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cccf1272-d5b9-4ee2-9d56-e3a53f254f57-lib-modules\") pod \"kube-proxy-dtq27\" (UID: \"cccf1272-d5b9-4ee2-9d56-e3a53f254f57\") " pod="kube-system/kube-proxy-dtq27" Jan 15 05:44:17.942857 kubelet[2755]: I0115 05:44:17.942850 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2zql\" (UniqueName: \"kubernetes.io/projected/cccf1272-d5b9-4ee2-9d56-e3a53f254f57-kube-api-access-v2zql\") pod \"kube-proxy-dtq27\" (UID: \"cccf1272-d5b9-4ee2-9d56-e3a53f254f57\") " pod="kube-system/kube-proxy-dtq27" Jan 15 05:44:18.130255 kubelet[2755]: E0115 05:44:18.130055 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:18.130962 containerd[1603]: time="2026-01-15T05:44:18.130757773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dtq27,Uid:cccf1272-d5b9-4ee2-9d56-e3a53f254f57,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:18.183228 containerd[1603]: time="2026-01-15T05:44:18.183074607Z" level=info msg="connecting to shim 1344bc450466b620acc22d49e1e2ae190ac50b0dab09b039125420f381c9d342" address="unix:///run/containerd/s/8d8c226efc2b926fafaa9423a9f5fcea6f5a458fdabc058e552c1a6370b4c4c6" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:18.241770 systemd[1]: Started cri-containerd-1344bc450466b620acc22d49e1e2ae190ac50b0dab09b039125420f381c9d342.scope - libcontainer container 1344bc450466b620acc22d49e1e2ae190ac50b0dab09b039125420f381c9d342. 
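
containerd reports the shim for the kube-proxy sandbox as a ttrpc unix socket under /run/containerd/s/. The Go sketch below simply dials that socket to confirm the shim is accepting connections; it is an illustrative check, not part of the log, speaking ttrpc itself is out of scope, and the path (copied from the entry above) is specific to this sandbox and requires root to open.

// Minimal sketch: confirm the containerd shim socket from the "connecting to shim"
// entry above is accepting connections. Dial and close only; no ttrpc traffic.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	socket := "/run/containerd/s/8d8c226efc2b926fafaa9423a9f5fcea6f5a458fdabc058e552c1a6370b4c4c6"

	conn, err := net.DialTimeout("unix", socket, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim is listening on", socket)
}
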
Jan 15 05:44:18.263665 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 15 05:44:18.263949 kernel: audit: type=1334 audit(1768455858.257:428): prog-id=133 op=LOAD Jan 15 05:44:18.264064 kernel: audit: type=1334 audit(1768455858.258:429): prog-id=134 op=LOAD Jan 15 05:44:18.257000 audit: BPF prog-id=133 op=LOAD Jan 15 05:44:18.258000 audit: BPF prog-id=134 op=LOAD Jan 15 05:44:18.266564 kernel: audit: type=1300 audit(1768455858.258:429): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.258000 audit[2828]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.288543 kernel: audit: type=1327 audit(1768455858.258:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.291655 kernel: audit: type=1334 audit(1768455858.258:430): prog-id=134 op=UNLOAD Jan 15 05:44:18.258000 audit: BPF prog-id=134 op=UNLOAD Jan 15 05:44:18.258000 audit[2828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.302836 kernel: audit: type=1300 audit(1768455858.258:430): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.313540 kernel: audit: type=1327 audit(1768455858.258:430): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.259000 audit: BPF prog-id=135 op=LOAD Jan 15 05:44:18.316960 kernel: audit: type=1334 audit(1768455858.259:431): prog-id=135 op=LOAD Jan 15 05:44:18.259000 audit[2828]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.324635 kubelet[2755]: E0115 05:44:18.320720 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:18.324925 containerd[1603]: time="2026-01-15T05:44:18.319732739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dtq27,Uid:cccf1272-d5b9-4ee2-9d56-e3a53f254f57,Namespace:kube-system,Attempt:0,} returns sandbox id \"1344bc450466b620acc22d49e1e2ae190ac50b0dab09b039125420f381c9d342\"" Jan 15 05:44:18.324925 containerd[1603]: time="2026-01-15T05:44:18.322721842Z" level=info msg="CreateContainer within sandbox \"1344bc450466b620acc22d49e1e2ae190ac50b0dab09b039125420f381c9d342\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 05:44:18.328696 kernel: audit: type=1300 audit(1768455858.259:431): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.339982 kernel: audit: type=1327 audit(1768455858.259:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.259000 audit: BPF prog-id=136 op=LOAD Jan 15 05:44:18.259000 audit[2828]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.259000 audit: BPF prog-id=136 op=UNLOAD Jan 15 05:44:18.259000 audit[2828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.259000 audit: BPF prog-id=135 op=UNLOAD Jan 15 05:44:18.259000 audit[2828]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.259000 audit: BPF prog-id=137 op=LOAD Jan 15 05:44:18.259000 audit[2828]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2817 pid=2828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343462633435303436366236323061636332326434396531653261 Jan 15 05:44:18.358589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418631118.mount: Deactivated successfully. Jan 15 05:44:18.359113 containerd[1603]: time="2026-01-15T05:44:18.358603204Z" level=info msg="Container da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:18.362079 systemd[1]: Created slice kubepods-besteffort-pod370e809d_6afa_496e_bcd5_3598d6f6c5da.slice - libcontainer container kubepods-besteffort-pod370e809d_6afa_496e_bcd5_3598d6f6c5da.slice. Jan 15 05:44:18.375331 containerd[1603]: time="2026-01-15T05:44:18.375219941Z" level=info msg="CreateContainer within sandbox \"1344bc450466b620acc22d49e1e2ae190ac50b0dab09b039125420f381c9d342\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857\"" Jan 15 05:44:18.376148 containerd[1603]: time="2026-01-15T05:44:18.376053175Z" level=info msg="StartContainer for \"da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857\"" Jan 15 05:44:18.378328 containerd[1603]: time="2026-01-15T05:44:18.378066498Z" level=info msg="connecting to shim da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857" address="unix:///run/containerd/s/8d8c226efc2b926fafaa9423a9f5fcea6f5a458fdabc058e552c1a6370b4c4c6" protocol=ttrpc version=3 Jan 15 05:44:18.407667 systemd[1]: Started cri-containerd-da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857.scope - libcontainer container da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857. 
Jan 15 05:44:18.446179 kubelet[2755]: I0115 05:44:18.446156 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/370e809d-6afa-496e-bcd5-3598d6f6c5da-var-lib-calico\") pod \"tigera-operator-7dcd859c48-t6x75\" (UID: \"370e809d-6afa-496e-bcd5-3598d6f6c5da\") " pod="tigera-operator/tigera-operator-7dcd859c48-t6x75" Jan 15 05:44:18.446587 kubelet[2755]: I0115 05:44:18.446339 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hnm\" (UniqueName: \"kubernetes.io/projected/370e809d-6afa-496e-bcd5-3598d6f6c5da-kube-api-access-94hnm\") pod \"tigera-operator-7dcd859c48-t6x75\" (UID: \"370e809d-6afa-496e-bcd5-3598d6f6c5da\") " pod="tigera-operator/tigera-operator-7dcd859c48-t6x75" Jan 15 05:44:18.488000 audit: BPF prog-id=138 op=LOAD Jan 15 05:44:18.488000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2817 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461396132653734666364396235326533346436623462373234363739 Jan 15 05:44:18.488000 audit: BPF prog-id=139 op=LOAD Jan 15 05:44:18.488000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2817 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461396132653734666364396235326533346436623462373234363739 Jan 15 05:44:18.488000 audit: BPF prog-id=139 op=UNLOAD Jan 15 05:44:18.488000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461396132653734666364396235326533346436623462373234363739 Jan 15 05:44:18.488000 audit: BPF prog-id=138 op=UNLOAD Jan 15 05:44:18.488000 audit[2852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2817 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.488000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461396132653734666364396235326533346436623462373234363739 Jan 15 05:44:18.488000 audit: BPF prog-id=140 op=LOAD Jan 15 05:44:18.488000 audit[2852]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2817 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461396132653734666364396235326533346436623462373234363739 Jan 15 05:44:18.512564 containerd[1603]: time="2026-01-15T05:44:18.511894479Z" level=info msg="StartContainer for \"da9a2e74fcd9b52e34d6b4b724679bbb55d783bdae843602791dc5751613a857\" returns successfully" Jan 15 05:44:18.561871 kubelet[2755]: E0115 05:44:18.561798 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:18.669339 containerd[1603]: time="2026-01-15T05:44:18.669154290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t6x75,Uid:370e809d-6afa-496e-bcd5-3598d6f6c5da,Namespace:tigera-operator,Attempt:0,}" Jan 15 05:44:18.688000 audit[2920]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.688000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea8d86e00 a2=0 a3=7ffea8d86dec items=0 ppid=2864 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.688000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 05:44:18.689000 audit[2919]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.689000 audit[2919]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddbaffca0 a2=0 a3=78c6c23967305059 items=0 ppid=2864 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 05:44:18.692000 audit[2923]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.692000 audit[2923]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe530b66b0 a2=0 a3=7ffe530b669c items=0 ppid=2864 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.692000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 05:44:18.693000 audit[2922]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.693000 audit[2922]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd9e63ac0 a2=0 a3=7ffdd9e63aac items=0 ppid=2864 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 05:44:18.695000 audit[2929]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.695000 audit[2929]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd63c35630 a2=0 a3=7ffd63c3561c items=0 ppid=2864 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 05:44:18.698097 containerd[1603]: time="2026-01-15T05:44:18.697995659Z" level=info msg="connecting to shim b09471ed70ebbf841b89a4708db31ee63dde3856d915788f0dd1aeaae357b42e" address="unix:///run/containerd/s/fa60afb3d91caeadeae0165d1050512d01e9bb8901cfec6d9542ab721d89415d" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:18.697000 audit[2931]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.697000 audit[2931]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec20456d0 a2=0 a3=7ffec20456bc items=0 ppid=2864 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 05:44:18.738138 systemd[1]: Started cri-containerd-b09471ed70ebbf841b89a4708db31ee63dde3856d915788f0dd1aeaae357b42e.scope - libcontainer container b09471ed70ebbf841b89a4708db31ee63dde3856d915788f0dd1aeaae357b42e. 
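In the audit records above, each PROCTITLE field is the audited process's command line, hex-encoded with NUL bytes separating the arguments: the runc entries come from containerd launching the container shims, and the iptables/ip6tables entries come from kube-proxy creating its KUBE-* chains. A minimal decoding sketch, assuming Python 3; the sample value is copied verbatim from the audit[2920] PROCTITLE record above (creation of the KUBE-PROXY-CANARY chain in the mangle table):

    # Sketch: decode an audit PROCTITLE value (hex-encoded argv, NUL-separated)
    # into a readable command line.
    def decode_proctitle(hex_value: str) -> str:
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

    # Copied from the audit[2920] record above.
    sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

The same helper applied to the other PROCTITLE values in this section yields the remaining kube-proxy chain and rule commands and the runc invocations for the sandbox and container tasks logged above.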
Jan 15 05:44:18.751000 audit: BPF prog-id=141 op=LOAD Jan 15 05:44:18.751000 audit: BPF prog-id=142 op=LOAD Jan 15 05:44:18.751000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.751000 audit: BPF prog-id=142 op=UNLOAD Jan 15 05:44:18.751000 audit[2947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.752000 audit: BPF prog-id=143 op=LOAD Jan 15 05:44:18.752000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.752000 audit: BPF prog-id=144 op=LOAD Jan 15 05:44:18.752000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.752000 audit: BPF prog-id=144 op=UNLOAD Jan 15 05:44:18.752000 audit[2947]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.752000 audit: BPF prog-id=143 op=UNLOAD Jan 15 05:44:18.752000 audit[2947]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.752000 audit: BPF prog-id=145 op=LOAD Jan 15 05:44:18.752000 audit[2947]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2934 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230393437316564373065626266383431623839613437303864623331 Jan 15 05:44:18.791098 containerd[1603]: time="2026-01-15T05:44:18.790982434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t6x75,Uid:370e809d-6afa-496e-bcd5-3598d6f6c5da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b09471ed70ebbf841b89a4708db31ee63dde3856d915788f0dd1aeaae357b42e\"" Jan 15 05:44:18.793245 containerd[1603]: time="2026-01-15T05:44:18.793115063Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 05:44:18.796000 audit[2972]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.796000 audit[2972]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda3636690 a2=0 a3=7ffda363667c items=0 ppid=2864 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 05:44:18.801000 audit[2974]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.801000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7a7f0300 a2=0 a3=7fff7a7f02ec items=0 ppid=2864 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 15 05:44:18.808000 audit[2977]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.808000 audit[2977]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc9cf49d80 a2=0 a3=7ffc9cf49d6c 
items=0 ppid=2864 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 15 05:44:18.811000 audit[2978]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.811000 audit[2978]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdac781bb0 a2=0 a3=7ffdac781b9c items=0 ppid=2864 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 05:44:18.817000 audit[2980]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.817000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffec7b71320 a2=0 a3=7ffec7b7130c items=0 ppid=2864 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 05:44:18.819000 audit[2981]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.819000 audit[2981]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc3f22f80 a2=0 a3=7ffcc3f22f6c items=0 ppid=2864 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 05:44:18.825000 audit[2983]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.825000 audit[2983]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd52974a20 a2=0 a3=7ffd52974a0c items=0 ppid=2864 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 05:44:18.833000 audit[2986]: 
NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.833000 audit[2986]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1539fc10 a2=0 a3=7ffd1539fbfc items=0 ppid=2864 pid=2986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 15 05:44:18.836000 audit[2987]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.836000 audit[2987]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9dadfba0 a2=0 a3=7ffc9dadfb8c items=0 ppid=2864 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 05:44:18.841000 audit[2989]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.841000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd933cbda0 a2=0 a3=7ffd933cbd8c items=0 ppid=2864 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 05:44:18.844000 audit[2990]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.844000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd80e9c0d0 a2=0 a3=7ffd80e9c0bc items=0 ppid=2864 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 05:44:18.849000 audit[2992]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.849000 audit[2992]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefad968e0 a2=0 a3=7ffefad968cc items=0 ppid=2864 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.849000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 05:44:18.856000 audit[2995]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.856000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe93a0c460 a2=0 a3=7ffe93a0c44c items=0 ppid=2864 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 05:44:18.864000 audit[2998]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.864000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbe034490 a2=0 a3=7ffdbe03447c items=0 ppid=2864 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 05:44:18.866000 audit[2999]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.866000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcffaf5ee0 a2=0 a3=7ffcffaf5ecc items=0 ppid=2864 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 05:44:18.871000 audit[3001]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.871000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcbd614de0 a2=0 a3=7ffcbd614dcc items=0 ppid=2864 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:44:18.879000 audit[3004]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.879000 audit[3004]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc98ab3b20 a2=0 a3=7ffc98ab3b0c items=0 ppid=2864 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:44:18.882000 audit[3005]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.882000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1fa17a50 a2=0 a3=7ffe1fa17a3c items=0 ppid=2864 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 05:44:18.887000 audit[3007]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:44:18.887000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc1a260ec0 a2=0 a3=7ffc1a260eac items=0 ppid=2864 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 05:44:18.923000 audit[3013]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:18.923000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe04cf66e0 a2=0 a3=7ffe04cf66cc items=0 ppid=2864 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:18.933000 audit[3013]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:18.933000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe04cf66e0 a2=0 a3=7ffe04cf66cc items=0 ppid=2864 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:18.936000 audit[3018]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3018 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.936000 audit[3018]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd08a49120 a2=0 a3=7ffd08a4910c items=0 ppid=2864 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 05:44:18.941000 audit[3020]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.941000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd69326780 a2=0 a3=7ffd6932676c items=0 ppid=2864 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 15 05:44:18.949000 audit[3023]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.949000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc51acdc20 a2=0 a3=7ffc51acdc0c items=0 ppid=2864 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 15 05:44:18.951000 audit[3024]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.951000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddf43d320 a2=0 a3=7ffddf43d30c items=0 ppid=2864 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 05:44:18.957000 audit[3026]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.957000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1e8115a0 a2=0 a3=7ffc1e81158c items=0 ppid=2864 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.957000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 05:44:18.959000 audit[3027]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.959000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8b646b90 a2=0 a3=7ffd8b646b7c items=0 ppid=2864 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 05:44:18.964000 audit[3029]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.964000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeab6de930 a2=0 a3=7ffeab6de91c items=0 ppid=2864 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 15 05:44:18.973000 audit[3032]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.973000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffdbb8abf0 a2=0 a3=7fffdbb8abdc items=0 ppid=2864 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 05:44:18.975000 audit[3033]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.975000 audit[3033]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc54953d40 a2=0 a3=7ffc54953d2c items=0 ppid=2864 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 05:44:18.982000 audit[3035]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.982000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc29b5c920 a2=0 
a3=7ffc29b5c90c items=0 ppid=2864 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 05:44:18.984000 audit[3036]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.984000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb266c760 a2=0 a3=7ffcb266c74c items=0 ppid=2864 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 05:44:18.990000 audit[3038]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.990000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf5fd1c20 a2=0 a3=7ffdf5fd1c0c items=0 ppid=2864 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 05:44:18.997000 audit[3041]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:18.997000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0190e420 a2=0 a3=7ffd0190e40c items=0 ppid=2864 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:18.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 05:44:19.005000 audit[3044]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.005000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf241ee90 a2=0 a3=7ffdf241ee7c items=0 ppid=2864 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.005000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 15 05:44:19.007000 audit[3045]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.007000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc88e09a80 a2=0 a3=7ffc88e09a6c items=0 ppid=2864 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 05:44:19.012000 audit[3047]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.012000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff5dc2df80 a2=0 a3=7fff5dc2df6c items=0 ppid=2864 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:44:19.019000 audit[3050]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.019000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff2d1fab20 a2=0 a3=7fff2d1fab0c items=0 ppid=2864 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:44:19.022000 audit[3051]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.022000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf7cb0900 a2=0 a3=7ffdf7cb08ec items=0 ppid=2864 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 05:44:19.026000 audit[3053]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.026000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd3e41a580 a2=0 a3=7ffd3e41a56c items=0 ppid=2864 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 05:44:19.028000 audit[3054]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.028000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc96d7f80 a2=0 a3=7fffc96d7f6c items=0 ppid=2864 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 05:44:19.033000 audit[3056]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.033000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffce8ab8410 a2=0 a3=7ffce8ab83fc items=0 ppid=2864 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:44:19.041000 audit[3059]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:44:19.041000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd63c917e0 a2=0 a3=7ffd63c917cc items=0 ppid=2864 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:44:19.047000 audit[3061]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 05:44:19.047000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc260db390 a2=0 a3=7ffc260db37c items=0 ppid=2864 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.047000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:19.048000 audit[3061]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 05:44:19.048000 audit[3061]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc260db390 a2=0 a3=7ffc260db37c items=0 ppid=2864 pid=3061 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:19.048000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:19.059864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222345081.mount: Deactivated successfully. Jan 15 05:44:19.717023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263443700.mount: Deactivated successfully. Jan 15 05:44:20.285941 containerd[1603]: time="2026-01-15T05:44:20.285864975Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:20.287285 containerd[1603]: time="2026-01-15T05:44:20.287164700Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 15 05:44:20.288718 containerd[1603]: time="2026-01-15T05:44:20.288670464Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:20.291196 containerd[1603]: time="2026-01-15T05:44:20.291153622Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:20.291908 containerd[1603]: time="2026-01-15T05:44:20.291836523Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.498663194s" Jan 15 05:44:20.291908 containerd[1603]: time="2026-01-15T05:44:20.291880926Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 15 05:44:20.294026 containerd[1603]: time="2026-01-15T05:44:20.293956608Z" level=info msg="CreateContainer within sandbox \"b09471ed70ebbf841b89a4708db31ee63dde3856d915788f0dd1aeaae357b42e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 05:44:20.304973 containerd[1603]: time="2026-01-15T05:44:20.304829929Z" level=info msg="Container 7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:20.313179 containerd[1603]: time="2026-01-15T05:44:20.313100035Z" level=info msg="CreateContainer within sandbox \"b09471ed70ebbf841b89a4708db31ee63dde3856d915788f0dd1aeaae357b42e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e\"" Jan 15 05:44:20.314072 containerd[1603]: time="2026-01-15T05:44:20.314000318Z" level=info msg="StartContainer for \"7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e\"" Jan 15 05:44:20.315157 containerd[1603]: time="2026-01-15T05:44:20.315095992Z" level=info msg="connecting to shim 7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e" address="unix:///run/containerd/s/fa60afb3d91caeadeae0165d1050512d01e9bb8901cfec6d9542ab721d89415d" protocol=ttrpc version=3 Jan 15 05:44:20.339694 systemd[1]: Started 
cri-containerd-7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e.scope - libcontainer container 7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e. Jan 15 05:44:20.353000 audit: BPF prog-id=146 op=LOAD Jan 15 05:44:20.353000 audit: BPF prog-id=147 op=LOAD Jan 15 05:44:20.353000 audit[3070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.353000 audit: BPF prog-id=147 op=UNLOAD Jan 15 05:44:20.353000 audit[3070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.354000 audit: BPF prog-id=148 op=LOAD Jan 15 05:44:20.354000 audit[3070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.354000 audit: BPF prog-id=149 op=LOAD Jan 15 05:44:20.354000 audit[3070]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.354000 audit: BPF prog-id=149 op=UNLOAD Jan 15 05:44:20.354000 audit[3070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.354000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.354000 audit: BPF prog-id=148 op=UNLOAD Jan 15 05:44:20.354000 audit[3070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.354000 audit: BPF prog-id=150 op=LOAD Jan 15 05:44:20.354000 audit[3070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2934 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:20.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763363833356638633465633434663138323639636630666166373532 Jan 15 05:44:20.376397 containerd[1603]: time="2026-01-15T05:44:20.376212125Z" level=info msg="StartContainer for \"7c6835f8c4ec44f18269cf0faf7527dad5d28b2955cfc9d703cd1c44f8d7229e\" returns successfully" Jan 15 05:44:20.580302 kubelet[2755]: I0115 05:44:20.580071 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-t6x75" podStartSLOduration=1.079819447 podStartE2EDuration="2.58005684s" podCreationTimestamp="2026-01-15 05:44:18 +0000 UTC" firstStartedPulling="2026-01-15 05:44:18.792387684 +0000 UTC m=+6.390430214" lastFinishedPulling="2026-01-15 05:44:20.292625077 +0000 UTC m=+7.890667607" observedRunningTime="2026-01-15 05:44:20.579370677 +0000 UTC m=+8.177413207" watchObservedRunningTime="2026-01-15 05:44:20.58005684 +0000 UTC m=+8.178099370" Jan 15 05:44:20.580302 kubelet[2755]: I0115 05:44:20.580265 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dtq27" podStartSLOduration=3.580259105 podStartE2EDuration="3.580259105s" podCreationTimestamp="2026-01-15 05:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:44:18.577130219 +0000 UTC m=+6.175172750" watchObservedRunningTime="2026-01-15 05:44:20.580259105 +0000 UTC m=+8.178301635" Jan 15 05:44:24.409291 kubelet[2755]: E0115 05:44:24.409156 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:24.581711 kubelet[2755]: E0115 05:44:24.581645 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:25.751518 kubelet[2755]: E0115 05:44:25.751039 2755 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:25.812949 kubelet[2755]: E0115 05:44:25.812616 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:25.926223 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 15 05:44:25.925000 audit[1820]: USER_END pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:44:25.930502 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 15 05:44:25.930578 kernel: audit: type=1106 audit(1768455865.925:508): pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:44:25.944912 sshd[1819]: Connection closed by 10.0.0.1 port 33486 Jan 15 05:44:25.942287 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jan 15 05:44:25.926000 audit[1820]: CRED_DISP pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:44:25.957557 kernel: audit: type=1104 audit(1768455865.926:509): pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:44:25.961802 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit. Jan 15 05:44:25.962961 systemd[1]: sshd@6-10.0.0.123:22-10.0.0.1:33486.service: Deactivated successfully. Jan 15 05:44:25.956000 audit[1814]: USER_END pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:44:25.975051 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 05:44:25.975378 systemd[1]: session-8.scope: Consumed 4.962s CPU time, 214.9M memory peak. Jan 15 05:44:25.984498 kernel: audit: type=1106 audit(1768455865.956:510): pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:44:25.956000 audit[1814]: CRED_DISP pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:44:25.986953 systemd-logind[1578]: Removed session 8. 
Jan 15 05:44:26.001517 kernel: audit: type=1104 audit(1768455865.956:511): pid=1814 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:44:25.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.123:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:26.023604 kernel: audit: type=1131 audit(1768455865.963:512): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.123:22-10.0.0.1:33486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:44:26.381000 audit[3160]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:26.392525 kernel: audit: type=1325 audit(1768455866.381:513): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:26.381000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe5410dc80 a2=0 a3=7ffe5410dc6c items=0 ppid=2864 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:26.411586 kernel: audit: type=1300 audit(1768455866.381:513): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe5410dc80 a2=0 a3=7ffe5410dc6c items=0 ppid=2864 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:26.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:26.426684 kernel: audit: type=1327 audit(1768455866.381:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:26.393000 audit[3160]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:26.436606 kernel: audit: type=1325 audit(1768455866.393:514): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:26.436726 kernel: audit: type=1300 audit(1768455866.393:514): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5410dc80 a2=0 a3=0 items=0 ppid=2864 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:26.393000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5410dc80 a2=0 a3=0 items=0 ppid=2864 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:26.393000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:26.449000 audit[3162]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:26.449000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffddf5ad220 a2=0 a3=7ffddf5ad20c items=0 ppid=2864 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:26.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:26.461000 audit[3162]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:26.461000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddf5ad220 a2=0 a3=0 items=0 ppid=2864 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:26.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:26.588108 kubelet[2755]: E0115 05:44:26.588072 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:28.724000 audit[3164]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:28.724000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff41b40dd0 a2=0 a3=7fff41b40dbc items=0 ppid=2864 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:28.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:28.735000 audit[3164]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:28.735000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff41b40dd0 a2=0 a3=0 items=0 ppid=2864 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:28.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:28.777000 audit[3166]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:28.777000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcae40c510 a2=0 a3=7ffcae40c4fc items=0 ppid=2864 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:28.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:28.784000 audit[3166]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:28.784000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcae40c510 a2=0 a3=0 items=0 ppid=2864 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:28.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:29.468639 update_engine[1579]: I20260115 05:44:29.468525 1579 update_attempter.cc:509] Updating boot flags... Jan 15 05:44:29.803000 audit[3184]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:29.803000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffd99c7da0 a2=0 a3=7fffd99c7d8c items=0 ppid=2864 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:29.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:29.814000 audit[3184]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:29.814000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd99c7da0 a2=0 a3=0 items=0 ppid=2864 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:29.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:30.860000 audit[3186]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:30.860000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd8f2ed980 a2=0 a3=7ffd8f2ed96c items=0 ppid=2864 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:30.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:30.876000 audit[3186]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:30.876000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8f2ed980 a2=0 a3=0 items=0 ppid=2864 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:30.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:30.897504 systemd[1]: Created slice kubepods-besteffort-podcdf0a94f_9abb_4abe_a5de_d6c231b9c063.slice - libcontainer container kubepods-besteffort-podcdf0a94f_9abb_4abe_a5de_d6c231b9c063.slice. Jan 15 05:44:30.945634 kubelet[2755]: I0115 05:44:30.945569 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf0a94f-9abb-4abe-a5de-d6c231b9c063-tigera-ca-bundle\") pod \"calico-typha-797886cd65-h7h4q\" (UID: \"cdf0a94f-9abb-4abe-a5de-d6c231b9c063\") " pod="calico-system/calico-typha-797886cd65-h7h4q" Jan 15 05:44:30.945634 kubelet[2755]: I0115 05:44:30.945640 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7vj\" (UniqueName: \"kubernetes.io/projected/cdf0a94f-9abb-4abe-a5de-d6c231b9c063-kube-api-access-pb7vj\") pod \"calico-typha-797886cd65-h7h4q\" (UID: \"cdf0a94f-9abb-4abe-a5de-d6c231b9c063\") " pod="calico-system/calico-typha-797886cd65-h7h4q" Jan 15 05:44:30.946084 kubelet[2755]: I0115 05:44:30.945661 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cdf0a94f-9abb-4abe-a5de-d6c231b9c063-typha-certs\") pod \"calico-typha-797886cd65-h7h4q\" (UID: \"cdf0a94f-9abb-4abe-a5de-d6c231b9c063\") " pod="calico-system/calico-typha-797886cd65-h7h4q" Jan 15 05:44:31.081365 systemd[1]: Created slice kubepods-besteffort-pod633bdf6b_32ca_4a0a_a687_8f1c45d119c9.slice - libcontainer container kubepods-besteffort-pod633bdf6b_32ca_4a0a_a687_8f1c45d119c9.slice. 
Jan 15 05:44:31.148055 kubelet[2755]: I0115 05:44:31.147624 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-cni-net-dir\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148055 kubelet[2755]: I0115 05:44:31.147692 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-lib-modules\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148055 kubelet[2755]: I0115 05:44:31.147709 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-var-lib-calico\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148055 kubelet[2755]: I0115 05:44:31.147727 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-cni-log-dir\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148055 kubelet[2755]: I0115 05:44:31.147743 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-node-certs\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148250 kubelet[2755]: I0115 05:44:31.147816 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-cni-bin-dir\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148250 kubelet[2755]: I0115 05:44:31.147863 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-policysync\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148250 kubelet[2755]: I0115 05:44:31.147915 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-tigera-ca-bundle\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148250 kubelet[2755]: I0115 05:44:31.147949 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g78k\" (UniqueName: \"kubernetes.io/projected/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-kube-api-access-6g78k\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148250 kubelet[2755]: I0115 05:44:31.147984 2755 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-var-run-calico\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148355 kubelet[2755]: I0115 05:44:31.148003 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-flexvol-driver-host\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.148355 kubelet[2755]: I0115 05:44:31.148016 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/633bdf6b-32ca-4a0a-a687-8f1c45d119c9-xtables-lock\") pod \"calico-node-7z7pn\" (UID: \"633bdf6b-32ca-4a0a-a687-8f1c45d119c9\") " pod="calico-system/calico-node-7z7pn" Jan 15 05:44:31.202827 kubelet[2755]: E0115 05:44:31.202667 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:31.203930 containerd[1603]: time="2026-01-15T05:44:31.203765618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797886cd65-h7h4q,Uid:cdf0a94f-9abb-4abe-a5de-d6c231b9c063,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:31.234231 containerd[1603]: time="2026-01-15T05:44:31.234046622Z" level=info msg="connecting to shim e05a1824cae25151903c96a8390e635bed478d7984dcbed74a8ecddc63f71bc8" address="unix:///run/containerd/s/66b58245ae47f4006b9cc7d87ca581142280d811313a05810de4d2ec250e6b8d" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:31.264006 kubelet[2755]: E0115 05:44:31.263950 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.264006 kubelet[2755]: W0115 05:44:31.263998 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.264129 kubelet[2755]: E0115 05:44:31.264027 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.277093 kubelet[2755]: E0115 05:44:31.276917 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.277093 kubelet[2755]: W0115 05:44:31.276937 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.277093 kubelet[2755]: E0115 05:44:31.276955 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.278160 kubelet[2755]: E0115 05:44:31.277726 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:31.304792 systemd[1]: Started cri-containerd-e05a1824cae25151903c96a8390e635bed478d7984dcbed74a8ecddc63f71bc8.scope - libcontainer container e05a1824cae25151903c96a8390e635bed478d7984dcbed74a8ecddc63f71bc8. Jan 15 05:44:31.328000 audit: BPF prog-id=151 op=LOAD Jan 15 05:44:31.331876 kernel: kauditd_printk_skb: 31 callbacks suppressed Jan 15 05:44:31.331987 kernel: audit: type=1334 audit(1768455871.328:525): prog-id=151 op=LOAD Jan 15 05:44:31.329000 audit: BPF prog-id=152 op=LOAD Jan 15 05:44:31.340179 kernel: audit: type=1334 audit(1768455871.329:526): prog-id=152 op=LOAD Jan 15 05:44:31.356098 kernel: audit: type=1300 audit(1768455871.329:526): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.347749 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356298 kubelet[2755]: W0115 05:44:31.347768 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.347864 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.348359 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356298 kubelet[2755]: W0115 05:44:31.348378 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.348482 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.348875 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356298 kubelet[2755]: W0115 05:44:31.348886 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.348901 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.356298 kubelet[2755]: E0115 05:44:31.349247 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356907 kubelet[2755]: W0115 05:44:31.349256 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.356907 kubelet[2755]: E0115 05:44:31.349267 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.356907 kubelet[2755]: E0115 05:44:31.349696 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356907 kubelet[2755]: W0115 05:44:31.349706 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.356907 kubelet[2755]: E0115 05:44:31.349716 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.356907 kubelet[2755]: E0115 05:44:31.350104 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356907 kubelet[2755]: W0115 05:44:31.350112 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.356907 kubelet[2755]: E0115 05:44:31.350122 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.356907 kubelet[2755]: E0115 05:44:31.350386 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.356907 kubelet[2755]: W0115 05:44:31.350477 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.350488 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.350805 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357164 kubelet[2755]: W0115 05:44:31.350814 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.350822 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.351264 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357164 kubelet[2755]: W0115 05:44:31.351275 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.351283 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.351695 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357164 kubelet[2755]: W0115 05:44:31.351703 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357164 kubelet[2755]: E0115 05:44:31.351713 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.352016 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357663 kubelet[2755]: W0115 05:44:31.352025 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.352033 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.352264 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357663 kubelet[2755]: W0115 05:44:31.352272 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.352282 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.352724 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357663 kubelet[2755]: W0115 05:44:31.352735 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.352743 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357663 kubelet[2755]: E0115 05:44:31.353023 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357934 kubelet[2755]: W0115 05:44:31.353032 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357934 kubelet[2755]: E0115 05:44:31.353042 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357934 kubelet[2755]: E0115 05:44:31.353332 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357934 kubelet[2755]: W0115 05:44:31.353342 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357934 kubelet[2755]: E0115 05:44:31.353351 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357934 kubelet[2755]: E0115 05:44:31.353669 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357934 kubelet[2755]: W0115 05:44:31.353678 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.357934 kubelet[2755]: E0115 05:44:31.353686 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.357934 kubelet[2755]: E0115 05:44:31.354139 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.357934 kubelet[2755]: W0115 05:44:31.354148 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.354156 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.354533 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358211 kubelet[2755]: W0115 05:44:31.354542 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.354550 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.354790 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358211 kubelet[2755]: W0115 05:44:31.354800 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.354808 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.355106 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358211 kubelet[2755]: W0115 05:44:31.355114 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358211 kubelet[2755]: E0115 05:44:31.355123 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.358685 kubelet[2755]: E0115 05:44:31.355684 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358685 kubelet[2755]: W0115 05:44:31.355695 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358685 kubelet[2755]: E0115 05:44:31.355705 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.358685 kubelet[2755]: I0115 05:44:31.355764 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba42b2-88e3-4065-bd62-0b6bb90b29e9-registration-dir\") pod \"csi-node-driver-nzlc6\" (UID: \"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9\") " pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:31.358685 kubelet[2755]: E0115 05:44:31.356208 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358685 kubelet[2755]: W0115 05:44:31.356218 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358685 kubelet[2755]: E0115 05:44:31.356266 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.358685 kubelet[2755]: E0115 05:44:31.356855 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358685 kubelet[2755]: W0115 05:44:31.356863 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358916 kubelet[2755]: E0115 05:44:31.356877 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.358916 kubelet[2755]: E0115 05:44:31.357122 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358916 kubelet[2755]: W0115 05:44:31.357129 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358916 kubelet[2755]: E0115 05:44:31.357138 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.358916 kubelet[2755]: I0115 05:44:31.357325 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e9ba42b2-88e3-4065-bd62-0b6bb90b29e9-varrun\") pod \"csi-node-driver-nzlc6\" (UID: \"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9\") " pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:31.358916 kubelet[2755]: E0115 05:44:31.357718 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.358916 kubelet[2755]: W0115 05:44:31.357727 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.358916 kubelet[2755]: E0115 05:44:31.357740 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.358916 kubelet[2755]: E0115 05:44:31.358150 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.359149 kubelet[2755]: W0115 05:44:31.358158 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.359149 kubelet[2755]: E0115 05:44:31.358209 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.359793 kubelet[2755]: E0115 05:44:31.359646 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.359793 kubelet[2755]: W0115 05:44:31.359669 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.359793 kubelet[2755]: E0115 05:44:31.359683 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.360274 kubelet[2755]: I0115 05:44:31.360232 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wcxh\" (UniqueName: \"kubernetes.io/projected/e9ba42b2-88e3-4065-bd62-0b6bb90b29e9-kube-api-access-5wcxh\") pod \"csi-node-driver-nzlc6\" (UID: \"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9\") " pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:31.365830 kubelet[2755]: E0115 05:44:31.365750 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.366850 kubelet[2755]: W0115 05:44:31.365907 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.366850 kubelet[2755]: E0115 05:44:31.366110 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.368069 kubelet[2755]: E0115 05:44:31.367988 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.368069 kubelet[2755]: W0115 05:44:31.368049 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.368934 kubelet[2755]: E0115 05:44:31.368844 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.370293 kubelet[2755]: I0115 05:44:31.370259 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba42b2-88e3-4065-bd62-0b6bb90b29e9-kubelet-dir\") pod \"csi-node-driver-nzlc6\" (UID: \"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9\") " pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:31.372503 kubelet[2755]: E0115 05:44:31.371952 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.372503 kubelet[2755]: W0115 05:44:31.372132 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.372503 kubelet[2755]: E0115 05:44:31.372238 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.329000 audit: BPF prog-id=152 op=UNLOAD Jan 15 05:44:31.374793 kubelet[2755]: E0115 05:44:31.374688 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.374793 kubelet[2755]: W0115 05:44:31.374749 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.375657 kubelet[2755]: E0115 05:44:31.375535 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.376015 kubelet[2755]: I0115 05:44:31.375923 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba42b2-88e3-4065-bd62-0b6bb90b29e9-socket-dir\") pod \"csi-node-driver-nzlc6\" (UID: \"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9\") " pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:31.376494 kernel: audit: type=1327 audit(1768455871.329:526): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.376554 kernel: audit: type=1334 audit(1768455871.329:527): prog-id=152 op=UNLOAD Jan 15 05:44:31.376642 kernel: audit: type=1300 audit(1768455871.329:527): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.378281 kubelet[2755]: E0115 05:44:31.378078 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.378281 kubelet[2755]: W0115 05:44:31.378100 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.378686 kubelet[2755]: E0115 05:44:31.378325 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.378686 kubelet[2755]: E0115 05:44:31.378646 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.378686 kubelet[2755]: W0115 05:44:31.378660 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.379009 kubelet[2755]: E0115 05:44:31.378726 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.387276 kubelet[2755]: E0115 05:44:31.386991 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.387276 kubelet[2755]: W0115 05:44:31.387193 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.387276 kubelet[2755]: E0115 05:44:31.387271 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.394713 containerd[1603]: time="2026-01-15T05:44:31.392932960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7z7pn,Uid:633bdf6b-32ca-4a0a-a687-8f1c45d119c9,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:31.394807 kubelet[2755]: E0115 05:44:31.389847 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.394807 kubelet[2755]: W0115 05:44:31.389861 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.394807 kubelet[2755]: E0115 05:44:31.389876 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.394807 kubelet[2755]: E0115 05:44:31.390209 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:31.404487 kernel: audit: type=1327 audit(1768455871.329:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.404564 kernel: audit: type=1334 audit(1768455871.329:528): prog-id=153 op=LOAD Jan 15 05:44:31.329000 audit: BPF prog-id=153 op=LOAD Jan 15 05:44:31.406163 kernel: audit: type=1300 audit(1768455871.329:528): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.329000 audit: BPF prog-id=154 op=LOAD Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.439820 kernel: audit: type=1327 audit(1768455871.329:528): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.329000 audit: BPF prog-id=154 op=UNLOAD Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.329000 audit: BPF prog-id=153 op=UNLOAD Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.329000 audit: BPF prog-id=155 op=LOAD Jan 15 05:44:31.329000 audit[3209]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3197 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530356131383234636165323531353139303363393661383339306536 Jan 15 05:44:31.448542 containerd[1603]: time="2026-01-15T05:44:31.448183527Z" level=info msg="connecting to shim f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2" address="unix:///run/containerd/s/be27b2dda89ee2a879bd5b2648e05ba54ebf7411721c707411c6a7e1648843df" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:31.463645 containerd[1603]: time="2026-01-15T05:44:31.463567031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797886cd65-h7h4q,Uid:cdf0a94f-9abb-4abe-a5de-d6c231b9c063,Namespace:calico-system,Attempt:0,} returns sandbox id \"e05a1824cae25151903c96a8390e635bed478d7984dcbed74a8ecddc63f71bc8\"" Jan 15 05:44:31.466135 kubelet[2755]: E0115 05:44:31.465989 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:31.470129 containerd[1603]: 
time="2026-01-15T05:44:31.470032622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 05:44:31.477575 kubelet[2755]: E0115 05:44:31.477261 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.477575 kubelet[2755]: W0115 05:44:31.477285 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.477575 kubelet[2755]: E0115 05:44:31.477307 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.482059 kubelet[2755]: E0115 05:44:31.480319 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.482059 kubelet[2755]: W0115 05:44:31.482038 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.482177 kubelet[2755]: E0115 05:44:31.482072 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.483584 kubelet[2755]: E0115 05:44:31.483384 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.483584 kubelet[2755]: W0115 05:44:31.483562 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.484567 kubelet[2755]: E0115 05:44:31.484331 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.485039 kubelet[2755]: E0115 05:44:31.484927 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.485039 kubelet[2755]: W0115 05:44:31.484970 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.485039 kubelet[2755]: E0115 05:44:31.485000 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.486768 kubelet[2755]: E0115 05:44:31.486510 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.486768 kubelet[2755]: W0115 05:44:31.486528 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.486768 kubelet[2755]: E0115 05:44:31.486549 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.487373 kubelet[2755]: E0115 05:44:31.487290 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.487373 kubelet[2755]: W0115 05:44:31.487336 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.487811 kubelet[2755]: E0115 05:44:31.487740 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.487973 kubelet[2755]: E0115 05:44:31.487863 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.487973 kubelet[2755]: W0115 05:44:31.487956 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.488167 kubelet[2755]: E0115 05:44:31.488108 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.488294 kubelet[2755]: E0115 05:44:31.488240 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.488294 kubelet[2755]: W0115 05:44:31.488279 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.488538 kubelet[2755]: E0115 05:44:31.488501 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.488920 kubelet[2755]: E0115 05:44:31.488894 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.488920 kubelet[2755]: W0115 05:44:31.488905 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.489219 kubelet[2755]: E0115 05:44:31.489141 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.489742 kubelet[2755]: E0115 05:44:31.489684 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.489742 kubelet[2755]: W0115 05:44:31.489722 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.490129 kubelet[2755]: E0115 05:44:31.490050 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.490800 kubelet[2755]: E0115 05:44:31.490732 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.490800 kubelet[2755]: W0115 05:44:31.490774 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.491006 kubelet[2755]: E0115 05:44:31.490934 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.491364 kubelet[2755]: E0115 05:44:31.491292 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.491364 kubelet[2755]: W0115 05:44:31.491326 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.491696 kubelet[2755]: E0115 05:44:31.491624 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.492998 kubelet[2755]: E0115 05:44:31.492925 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.492998 kubelet[2755]: W0115 05:44:31.492970 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.493102 kubelet[2755]: E0115 05:44:31.493052 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.493560 kubelet[2755]: E0115 05:44:31.493367 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.493606 kubelet[2755]: W0115 05:44:31.493556 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.493779 kubelet[2755]: E0115 05:44:31.493728 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.494668 kubelet[2755]: E0115 05:44:31.494277 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.494668 kubelet[2755]: W0115 05:44:31.494318 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.494668 kubelet[2755]: E0115 05:44:31.494384 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.494835 kubelet[2755]: E0115 05:44:31.494787 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.494835 kubelet[2755]: W0115 05:44:31.494825 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.495054 kubelet[2755]: E0115 05:44:31.494938 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.495148 kubelet[2755]: E0115 05:44:31.495108 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.495148 kubelet[2755]: W0115 05:44:31.495137 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.495303 kubelet[2755]: E0115 05:44:31.495260 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.497516 kubelet[2755]: E0115 05:44:31.495816 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.497516 kubelet[2755]: W0115 05:44:31.495830 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.497516 kubelet[2755]: E0115 05:44:31.496133 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.497516 kubelet[2755]: E0115 05:44:31.496356 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.497516 kubelet[2755]: W0115 05:44:31.496364 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.497516 kubelet[2755]: E0115 05:44:31.496804 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.497516 kubelet[2755]: E0115 05:44:31.497143 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.497516 kubelet[2755]: W0115 05:44:31.497151 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.497516 kubelet[2755]: E0115 05:44:31.497227 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.498011 kubelet[2755]: E0115 05:44:31.497822 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.498011 kubelet[2755]: W0115 05:44:31.497833 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.498011 kubelet[2755]: E0115 05:44:31.497978 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.498688 kubelet[2755]: E0115 05:44:31.498676 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.498749 kubelet[2755]: W0115 05:44:31.498738 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.498940 kubelet[2755]: E0115 05:44:31.498864 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.499500 kubelet[2755]: E0115 05:44:31.499353 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.499500 kubelet[2755]: W0115 05:44:31.499364 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.499672 kubelet[2755]: E0115 05:44:31.499660 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.499990 kubelet[2755]: E0115 05:44:31.499979 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.500041 kubelet[2755]: W0115 05:44:31.500031 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.500169 kubelet[2755]: E0115 05:44:31.500158 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.502477 kubelet[2755]: E0115 05:44:31.502346 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.502477 kubelet[2755]: W0115 05:44:31.502357 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.502477 kubelet[2755]: E0115 05:44:31.502367 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:31.505719 kubelet[2755]: E0115 05:44:31.505533 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:31.505795 kubelet[2755]: W0115 05:44:31.505782 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:31.505841 kubelet[2755]: E0115 05:44:31.505831 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:31.506680 systemd[1]: Started cri-containerd-f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2.scope - libcontainer container f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2. Jan 15 05:44:31.528000 audit: BPF prog-id=156 op=LOAD Jan 15 05:44:31.529000 audit: BPF prog-id=157 op=LOAD Jan 15 05:44:31.529000 audit[3304]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.530000 audit: BPF prog-id=157 op=UNLOAD Jan 15 05:44:31.530000 audit[3304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.531000 audit: BPF prog-id=158 op=LOAD Jan 15 05:44:31.531000 audit[3304]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.531000 audit: BPF prog-id=159 op=LOAD Jan 15 05:44:31.531000 audit[3304]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.531000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.531000 audit: BPF prog-id=159 op=UNLOAD Jan 15 05:44:31.531000 audit[3304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.531000 audit: BPF prog-id=158 op=UNLOAD Jan 15 05:44:31.531000 audit[3304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.531000 audit: BPF prog-id=160 op=LOAD Jan 15 05:44:31.531000 audit[3304]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3288 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631613837343866323836643630323434386433646262633363363361 Jan 15 05:44:31.605075 containerd[1603]: time="2026-01-15T05:44:31.605012188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7z7pn,Uid:633bdf6b-32ca-4a0a-a687-8f1c45d119c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\"" Jan 15 05:44:31.606187 kubelet[2755]: E0115 05:44:31.606143 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:31.901000 audit[3356]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:31.901000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff8203a410 a2=0 a3=7fff8203a3fc items=0 ppid=2864 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
15 05:44:31.916000 audit[3356]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:31.916000 audit[3356]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8203a410 a2=0 a3=0 items=0 ppid=2864 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:31.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:32.115942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763080051.mount: Deactivated successfully. Jan 15 05:44:32.522306 kubelet[2755]: E0115 05:44:32.522212 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:32.694170 containerd[1603]: time="2026-01-15T05:44:32.694095578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:32.695506 containerd[1603]: time="2026-01-15T05:44:32.695285163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:32.696795 containerd[1603]: time="2026-01-15T05:44:32.696743184Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:32.699901 containerd[1603]: time="2026-01-15T05:44:32.699828172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:32.701064 containerd[1603]: time="2026-01-15T05:44:32.700837587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.230741879s" Jan 15 05:44:32.701064 containerd[1603]: time="2026-01-15T05:44:32.700865749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 15 05:44:32.703172 containerd[1603]: time="2026-01-15T05:44:32.703150733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 15 05:44:32.717849 containerd[1603]: time="2026-01-15T05:44:32.717741602Z" level=info msg="CreateContainer within sandbox \"e05a1824cae25151903c96a8390e635bed478d7984dcbed74a8ecddc63f71bc8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 05:44:32.728701 containerd[1603]: time="2026-01-15T05:44:32.728615245Z" level=info msg="Container 8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:32.741303 containerd[1603]: time="2026-01-15T05:44:32.741161843Z" level=info 
msg="CreateContainer within sandbox \"e05a1824cae25151903c96a8390e635bed478d7984dcbed74a8ecddc63f71bc8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b\"" Jan 15 05:44:32.742534 containerd[1603]: time="2026-01-15T05:44:32.742357770Z" level=info msg="StartContainer for \"8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b\"" Jan 15 05:44:32.744230 containerd[1603]: time="2026-01-15T05:44:32.744063095Z" level=info msg="connecting to shim 8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b" address="unix:///run/containerd/s/66b58245ae47f4006b9cc7d87ca581142280d811313a05810de4d2ec250e6b8d" protocol=ttrpc version=3 Jan 15 05:44:32.781699 systemd[1]: Started cri-containerd-8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b.scope - libcontainer container 8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b. Jan 15 05:44:32.801000 audit: BPF prog-id=161 op=LOAD Jan 15 05:44:32.801000 audit: BPF prog-id=162 op=LOAD Jan 15 05:44:32.801000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.801000 audit: BPF prog-id=162 op=UNLOAD Jan 15 05:44:32.801000 audit[3367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.802000 audit: BPF prog-id=163 op=LOAD Jan 15 05:44:32.802000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.802000 audit: BPF prog-id=164 op=LOAD Jan 15 05:44:32.802000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.802000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.802000 audit: BPF prog-id=164 op=UNLOAD Jan 15 05:44:32.802000 audit[3367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.802000 audit: BPF prog-id=163 op=UNLOAD Jan 15 05:44:32.802000 audit[3367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.802000 audit: BPF prog-id=165 op=LOAD Jan 15 05:44:32.802000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3197 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:32.802000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831363666636236393862306431613330373530653164653161356435 Jan 15 05:44:32.852768 containerd[1603]: time="2026-01-15T05:44:32.852615436Z" level=info msg="StartContainer for \"8166fcb698b0d1a30750e1de1a5d5e0cdae3d86e5967892e5a0c279d2813408b\" returns successfully" Jan 15 05:44:33.631078 kubelet[2755]: E0115 05:44:33.631000 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:33.676014 kubelet[2755]: E0115 05:44:33.675940 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.676014 kubelet[2755]: W0115 05:44:33.675991 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.676014 kubelet[2755]: E0115 05:44:33.676009 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.678749 kubelet[2755]: E0115 05:44:33.678694 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.678749 kubelet[2755]: W0115 05:44:33.678740 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.678847 kubelet[2755]: E0115 05:44:33.678757 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.679951 kubelet[2755]: E0115 05:44:33.679698 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.679951 kubelet[2755]: W0115 05:44:33.679715 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.679951 kubelet[2755]: E0115 05:44:33.679728 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.680252 kubelet[2755]: E0115 05:44:33.680193 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.680252 kubelet[2755]: W0115 05:44:33.680206 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.680252 kubelet[2755]: E0115 05:44:33.680215 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.682669 kubelet[2755]: I0115 05:44:33.682519 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-797886cd65-h7h4q" podStartSLOduration=2.448984679 podStartE2EDuration="3.682502644s" podCreationTimestamp="2026-01-15 05:44:30 +0000 UTC" firstStartedPulling="2026-01-15 05:44:31.469629927 +0000 UTC m=+19.067672456" lastFinishedPulling="2026-01-15 05:44:32.703147881 +0000 UTC m=+20.301190421" observedRunningTime="2026-01-15 05:44:33.662242211 +0000 UTC m=+21.260284741" watchObservedRunningTime="2026-01-15 05:44:33.682502644 +0000 UTC m=+21.280545175" Jan 15 05:44:33.683499 kubelet[2755]: E0115 05:44:33.682953 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.683499 kubelet[2755]: W0115 05:44:33.682969 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.683499 kubelet[2755]: E0115 05:44:33.682979 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.684205 kubelet[2755]: E0115 05:44:33.684136 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.684205 kubelet[2755]: W0115 05:44:33.684174 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.684205 kubelet[2755]: E0115 05:44:33.684183 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.684639 kubelet[2755]: E0115 05:44:33.684593 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.684639 kubelet[2755]: W0115 05:44:33.684630 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.684639 kubelet[2755]: E0115 05:44:33.684639 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.685109 kubelet[2755]: E0115 05:44:33.684832 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.685109 kubelet[2755]: W0115 05:44:33.684840 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.685109 kubelet[2755]: E0115 05:44:33.684847 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.685109 kubelet[2755]: E0115 05:44:33.685024 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.685109 kubelet[2755]: W0115 05:44:33.685031 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.685109 kubelet[2755]: E0115 05:44:33.685039 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.686179 kubelet[2755]: E0115 05:44:33.685311 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.686179 kubelet[2755]: W0115 05:44:33.685327 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.686179 kubelet[2755]: E0115 05:44:33.685336 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.686179 kubelet[2755]: E0115 05:44:33.685754 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.686179 kubelet[2755]: W0115 05:44:33.685763 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.686179 kubelet[2755]: E0115 05:44:33.685771 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.686179 kubelet[2755]: E0115 05:44:33.686159 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.686179 kubelet[2755]: W0115 05:44:33.686167 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.686476 kubelet[2755]: E0115 05:44:33.686175 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.687125 kubelet[2755]: E0115 05:44:33.687084 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.687125 kubelet[2755]: W0115 05:44:33.687124 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.687194 kubelet[2755]: E0115 05:44:33.687136 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.689133 kubelet[2755]: E0115 05:44:33.688906 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.689133 kubelet[2755]: W0115 05:44:33.688921 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.689133 kubelet[2755]: E0115 05:44:33.688932 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.689626 kubelet[2755]: E0115 05:44:33.689572 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.689626 kubelet[2755]: W0115 05:44:33.689610 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.689626 kubelet[2755]: E0115 05:44:33.689619 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.711000 audit[3432]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:33.711000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6d005490 a2=0 a3=7ffc6d00547c items=0 ppid=2864 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:33.715000 audit[3432]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:33.715000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc6d005490 a2=0 a3=7ffc6d00547c items=0 ppid=2864 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:33.717972 kubelet[2755]: E0115 05:44:33.717809 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.717972 kubelet[2755]: W0115 05:44:33.717849 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.717972 kubelet[2755]: E0115 05:44:33.717867 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.718592 kubelet[2755]: E0115 05:44:33.718360 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.718774 kubelet[2755]: W0115 05:44:33.718606 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.718966 kubelet[2755]: E0115 05:44:33.718866 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.720499 kubelet[2755]: E0115 05:44:33.720335 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.720499 kubelet[2755]: W0115 05:44:33.720373 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.720886 kubelet[2755]: E0115 05:44:33.720811 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.721839 kubelet[2755]: E0115 05:44:33.721772 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.721839 kubelet[2755]: W0115 05:44:33.721824 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.722697 kubelet[2755]: E0115 05:44:33.722557 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.723245 kubelet[2755]: E0115 05:44:33.723141 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.723245 kubelet[2755]: W0115 05:44:33.723177 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.723779 kubelet[2755]: E0115 05:44:33.723579 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.725190 kubelet[2755]: E0115 05:44:33.725144 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.725190 kubelet[2755]: W0115 05:44:33.725185 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.725591 kubelet[2755]: E0115 05:44:33.725541 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.725998 kubelet[2755]: E0115 05:44:33.725935 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.725998 kubelet[2755]: W0115 05:44:33.725974 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.726388 kubelet[2755]: E0115 05:44:33.726232 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.727503 kubelet[2755]: E0115 05:44:33.727338 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.727503 kubelet[2755]: W0115 05:44:33.727369 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.728044 kubelet[2755]: E0115 05:44:33.727964 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.729593 kubelet[2755]: E0115 05:44:33.729502 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.729593 kubelet[2755]: W0115 05:44:33.729534 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.729663 kubelet[2755]: E0115 05:44:33.729607 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.729975 kubelet[2755]: E0115 05:44:33.729878 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.729975 kubelet[2755]: W0115 05:44:33.729914 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.730037 kubelet[2755]: E0115 05:44:33.729979 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.730628 kubelet[2755]: E0115 05:44:33.730562 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.730628 kubelet[2755]: W0115 05:44:33.730614 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.730818 kubelet[2755]: E0115 05:44:33.730778 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.731270 kubelet[2755]: E0115 05:44:33.731233 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.731270 kubelet[2755]: W0115 05:44:33.731252 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.731936 kubelet[2755]: E0115 05:44:33.731385 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.732343 kubelet[2755]: E0115 05:44:33.732257 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.732551 kubelet[2755]: W0115 05:44:33.732374 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.732551 kubelet[2755]: E0115 05:44:33.732545 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.732847 kubelet[2755]: E0115 05:44:33.732796 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.732847 kubelet[2755]: W0115 05:44:33.732840 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.733004 kubelet[2755]: E0115 05:44:33.732959 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.733748 kubelet[2755]: E0115 05:44:33.733597 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.733748 kubelet[2755]: W0115 05:44:33.733610 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.733748 kubelet[2755]: E0115 05:44:33.733709 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.734281 kubelet[2755]: E0115 05:44:33.734262 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.734281 kubelet[2755]: W0115 05:44:33.734275 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.734281 kubelet[2755]: E0115 05:44:33.734309 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.734793 kubelet[2755]: E0115 05:44:33.734772 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.734793 kubelet[2755]: W0115 05:44:33.734792 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.734852 kubelet[2755]: E0115 05:44:33.734841 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:44:33.735577 kubelet[2755]: E0115 05:44:33.735496 2755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:44:33.735577 kubelet[2755]: W0115 05:44:33.735509 2755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:44:33.735577 kubelet[2755]: E0115 05:44:33.735518 2755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:44:33.790660 containerd[1603]: time="2026-01-15T05:44:33.790607737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:33.791911 containerd[1603]: time="2026-01-15T05:44:33.791611817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 15 05:44:33.793834 containerd[1603]: time="2026-01-15T05:44:33.793788745Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:33.797361 containerd[1603]: time="2026-01-15T05:44:33.796686972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:33.797361 containerd[1603]: time="2026-01-15T05:44:33.797258401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.094005097s" Jan 15 05:44:33.797361 containerd[1603]: time="2026-01-15T05:44:33.797285151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 15 05:44:33.801274 containerd[1603]: time="2026-01-15T05:44:33.801113444Z" level=info msg="CreateContainer within sandbox \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 05:44:33.816538 containerd[1603]: time="2026-01-15T05:44:33.815335517Z" level=info msg="Container 6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:33.824924 containerd[1603]: time="2026-01-15T05:44:33.824850371Z" level=info msg="CreateContainer within sandbox \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777\"" Jan 15 05:44:33.825862 containerd[1603]: time="2026-01-15T05:44:33.825563171Z" level=info msg="StartContainer for \"6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777\"" Jan 15 05:44:33.827641 containerd[1603]: time="2026-01-15T05:44:33.827605473Z" level=info msg="connecting to shim 6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777" address="unix:///run/containerd/s/be27b2dda89ee2a879bd5b2648e05ba54ebf7411721c707411c6a7e1648843df" protocol=ttrpc version=3 Jan 15 05:44:33.861911 systemd[1]: Started cri-containerd-6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777.scope - libcontainer container 6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777. 
Jan 15 05:44:33.936000 audit: BPF prog-id=166 op=LOAD Jan 15 05:44:33.936000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3288 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393931343032633730373735333636386265343861303263353631 Jan 15 05:44:33.936000 audit: BPF prog-id=167 op=LOAD Jan 15 05:44:33.936000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3288 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393931343032633730373735333636386265343861303263353631 Jan 15 05:44:33.936000 audit: BPF prog-id=167 op=UNLOAD Jan 15 05:44:33.936000 audit[3451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393931343032633730373735333636386265343861303263353631 Jan 15 05:44:33.936000 audit: BPF prog-id=166 op=UNLOAD Jan 15 05:44:33.936000 audit[3451]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393931343032633730373735333636386265343861303263353631 Jan 15 05:44:33.936000 audit: BPF prog-id=168 op=LOAD Jan 15 05:44:33.936000 audit[3451]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3288 pid=3451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:33.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662393931343032633730373735333636386265343861303263353631 Jan 15 05:44:33.973991 containerd[1603]: time="2026-01-15T05:44:33.973942707Z" level=info msg="StartContainer for 
\"6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777\" returns successfully" Jan 15 05:44:34.003079 systemd[1]: cri-containerd-6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777.scope: Deactivated successfully. Jan 15 05:44:34.004054 systemd[1]: cri-containerd-6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777.scope: Consumed 55ms CPU time, 6.3M memory peak, 2.8M written to disk. Jan 15 05:44:34.010698 containerd[1603]: time="2026-01-15T05:44:34.010656613Z" level=info msg="received container exit event container_id:\"6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777\" id:\"6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777\" pid:3466 exited_at:{seconds:1768455874 nanos:9882050}" Jan 15 05:44:34.021000 audit: BPF prog-id=168 op=UNLOAD Jan 15 05:44:34.047220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b991402c707753668be48a02c561d0fea25042280f090769ee1c886c6f1d777-rootfs.mount: Deactivated successfully. Jan 15 05:44:34.522252 kubelet[2755]: E0115 05:44:34.522125 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:34.638053 kubelet[2755]: E0115 05:44:34.637702 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:34.639122 kubelet[2755]: E0115 05:44:34.638721 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:34.640072 containerd[1603]: time="2026-01-15T05:44:34.640014574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 05:44:35.639988 kubelet[2755]: E0115 05:44:35.639869 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:36.522370 kubelet[2755]: E0115 05:44:36.522217 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:36.775375 containerd[1603]: time="2026-01-15T05:44:36.775164222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:36.777540 containerd[1603]: time="2026-01-15T05:44:36.777491256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 15 05:44:36.779086 containerd[1603]: time="2026-01-15T05:44:36.779006188Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:36.781910 containerd[1603]: time="2026-01-15T05:44:36.781828877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 15 05:44:36.782837 containerd[1603]: time="2026-01-15T05:44:36.782770212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.142721024s" Jan 15 05:44:36.782837 containerd[1603]: time="2026-01-15T05:44:36.782833941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 15 05:44:36.786482 containerd[1603]: time="2026-01-15T05:44:36.786336124Z" level=info msg="CreateContainer within sandbox \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 05:44:36.802510 containerd[1603]: time="2026-01-15T05:44:36.800984719Z" level=info msg="Container ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:36.818551 containerd[1603]: time="2026-01-15T05:44:36.818222165Z" level=info msg="CreateContainer within sandbox \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80\"" Jan 15 05:44:36.820498 containerd[1603]: time="2026-01-15T05:44:36.819532166Z" level=info msg="StartContainer for \"ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80\"" Jan 15 05:44:36.821735 containerd[1603]: time="2026-01-15T05:44:36.821626178Z" level=info msg="connecting to shim ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80" address="unix:///run/containerd/s/be27b2dda89ee2a879bd5b2648e05ba54ebf7411721c707411c6a7e1648843df" protocol=ttrpc version=3 Jan 15 05:44:36.864746 systemd[1]: Started cri-containerd-ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80.scope - libcontainer container ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80. 
Jan 15 05:44:36.963000 audit: BPF prog-id=169 op=LOAD Jan 15 05:44:36.967592 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 15 05:44:36.967712 kernel: audit: type=1334 audit(1768455876.963:559): prog-id=169 op=LOAD Jan 15 05:44:36.963000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:36.989665 kernel: audit: type=1300 audit(1768455876.963:559): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:36.989862 kernel: audit: type=1327 audit(1768455876.963:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:36.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:36.963000 audit: BPF prog-id=170 op=LOAD Jan 15 05:44:37.008917 kernel: audit: type=1334 audit(1768455876.963:560): prog-id=170 op=LOAD Jan 15 05:44:36.963000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:37.025274 kernel: audit: type=1300 audit(1768455876.963:560): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:37.025341 kernel: audit: type=1327 audit(1768455876.963:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:36.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:37.041854 kernel: audit: type=1334 audit(1768455876.964:561): prog-id=170 op=UNLOAD Jan 15 05:44:36.964000 audit: BPF prog-id=170 op=UNLOAD Jan 15 05:44:37.051019 kernel: audit: type=1300 audit(1768455876.964:561): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:36.964000 
audit[3513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:37.067198 kernel: audit: type=1327 audit(1768455876.964:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:36.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:37.082325 kernel: audit: type=1334 audit(1768455876.964:562): prog-id=169 op=UNLOAD Jan 15 05:44:36.964000 audit: BPF prog-id=169 op=UNLOAD Jan 15 05:44:36.964000 audit[3513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:36.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:36.964000 audit: BPF prog-id=171 op=LOAD Jan 15 05:44:36.964000 audit[3513]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3288 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:36.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163356666326331653932646262363839626462633139303236353130 Jan 15 05:44:37.086873 containerd[1603]: time="2026-01-15T05:44:37.085671272Z" level=info msg="StartContainer for \"ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80\" returns successfully" Jan 15 05:44:37.652562 kubelet[2755]: E0115 05:44:37.652380 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:38.293571 systemd[1]: cri-containerd-ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80.scope: Deactivated successfully. Jan 15 05:44:38.294103 systemd[1]: cri-containerd-ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80.scope: Consumed 1.420s CPU time, 181.6M memory peak, 3.9M read from disk, 171.3M written to disk. 
Jan 15 05:44:38.299831 containerd[1603]: time="2026-01-15T05:44:38.299783733Z" level=info msg="received container exit event container_id:\"ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80\" id:\"ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80\" pid:3526 exited_at:{seconds:1768455878 nanos:299274824}" Jan 15 05:44:38.301000 audit: BPF prog-id=171 op=UNLOAD Jan 15 05:44:38.350293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac5ff2c1e92dbb689bdbc190265109094275491403b9a77aaf75b8c613658e80-rootfs.mount: Deactivated successfully. Jan 15 05:44:38.407693 kubelet[2755]: I0115 05:44:38.407625 2755 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 05:44:38.467042 systemd[1]: Created slice kubepods-besteffort-podbad082bb_9731_4bff_b64e_9964fb68119a.slice - libcontainer container kubepods-besteffort-podbad082bb_9731_4bff_b64e_9964fb68119a.slice. Jan 15 05:44:38.481222 systemd[1]: Created slice kubepods-burstable-pod37ca1d96_eda1_47b6_b5e5_48121189a9c0.slice - libcontainer container kubepods-burstable-pod37ca1d96_eda1_47b6_b5e5_48121189a9c0.slice. Jan 15 05:44:38.493242 systemd[1]: Created slice kubepods-besteffort-pod647d95db_9ea2_4c12_b24e_24a6a7b2ddc1.slice - libcontainer container kubepods-besteffort-pod647d95db_9ea2_4c12_b24e_24a6a7b2ddc1.slice. Jan 15 05:44:38.505037 systemd[1]: Created slice kubepods-besteffort-pod1c7a473d_fbbd_41be_961a_cb9f606fd6ff.slice - libcontainer container kubepods-besteffort-pod1c7a473d_fbbd_41be_961a_cb9f606fd6ff.slice. Jan 15 05:44:38.517264 systemd[1]: Created slice kubepods-besteffort-podc9216aea_9a46_4a8f_81a9_8d30cdf7722b.slice - libcontainer container kubepods-besteffort-podc9216aea_9a46_4a8f_81a9_8d30cdf7722b.slice. Jan 15 05:44:38.530345 systemd[1]: Created slice kubepods-burstable-pod5e1c0e06_5a78_43e6_a8cc_cfe663be0279.slice - libcontainer container kubepods-burstable-pod5e1c0e06_5a78_43e6_a8cc_cfe663be0279.slice. Jan 15 05:44:38.538655 systemd[1]: Created slice kubepods-besteffort-pod2e111366_f223_4b0d_ad74_a8fd8f32e679.slice - libcontainer container kubepods-besteffort-pod2e111366_f223_4b0d_ad74_a8fd8f32e679.slice. Jan 15 05:44:38.546151 systemd[1]: Created slice kubepods-besteffort-pode9ba42b2_88e3_4065_bd62_0b6bb90b29e9.slice - libcontainer container kubepods-besteffort-pode9ba42b2_88e3_4065_bd62_0b6bb90b29e9.slice. 
Jan 15 05:44:38.549629 containerd[1603]: time="2026-01-15T05:44:38.549386875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzlc6,Uid:e9ba42b2-88e3-4065-bd62-0b6bb90b29e9,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:38.567970 kubelet[2755]: I0115 05:44:38.567756 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6cn\" (UniqueName: \"kubernetes.io/projected/bad082bb-9731-4bff-b64e-9964fb68119a-kube-api-access-6x6cn\") pod \"calico-apiserver-766d8cc98b-vvx8x\" (UID: \"bad082bb-9731-4bff-b64e-9964fb68119a\") " pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" Jan 15 05:44:38.567970 kubelet[2755]: I0115 05:44:38.567832 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-ca-bundle\") pod \"whisker-7c478c946b-q256m\" (UID: \"2e111366-f223-4b0d-ad74-a8fd8f32e679\") " pod="calico-system/whisker-7c478c946b-q256m" Jan 15 05:44:38.567970 kubelet[2755]: I0115 05:44:38.567851 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bad082bb-9731-4bff-b64e-9964fb68119a-calico-apiserver-certs\") pod \"calico-apiserver-766d8cc98b-vvx8x\" (UID: \"bad082bb-9731-4bff-b64e-9964fb68119a\") " pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" Jan 15 05:44:38.567970 kubelet[2755]: I0115 05:44:38.567872 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9216aea-9a46-4a8f-81a9-8d30cdf7722b-tigera-ca-bundle\") pod \"calico-kube-controllers-c8cc478f5-z859x\" (UID: \"c9216aea-9a46-4a8f-81a9-8d30cdf7722b\") " pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" Jan 15 05:44:38.568499 kubelet[2755]: I0115 05:44:38.568221 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr9vj\" (UniqueName: \"kubernetes.io/projected/1c7a473d-fbbd-41be-961a-cb9f606fd6ff-kube-api-access-cr9vj\") pod \"calico-apiserver-766d8cc98b-k6vdp\" (UID: \"1c7a473d-fbbd-41be-961a-cb9f606fd6ff\") " pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" Jan 15 05:44:38.568554 kubelet[2755]: I0115 05:44:38.568391 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37ca1d96-eda1-47b6-b5e5-48121189a9c0-config-volume\") pod \"coredns-668d6bf9bc-wdqdl\" (UID: \"37ca1d96-eda1-47b6-b5e5-48121189a9c0\") " pod="kube-system/coredns-668d6bf9bc-wdqdl" Jan 15 05:44:38.568650 kubelet[2755]: I0115 05:44:38.568550 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4cfz\" (UniqueName: \"kubernetes.io/projected/37ca1d96-eda1-47b6-b5e5-48121189a9c0-kube-api-access-v4cfz\") pod \"coredns-668d6bf9bc-wdqdl\" (UID: \"37ca1d96-eda1-47b6-b5e5-48121189a9c0\") " pod="kube-system/coredns-668d6bf9bc-wdqdl" Jan 15 05:44:38.568650 kubelet[2755]: I0115 05:44:38.568577 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/647d95db-9ea2-4c12-b24e-24a6a7b2ddc1-goldmane-ca-bundle\") pod \"goldmane-666569f655-q55xd\" (UID: 
\"647d95db-9ea2-4c12-b24e-24a6a7b2ddc1\") " pod="calico-system/goldmane-666569f655-q55xd" Jan 15 05:44:38.568650 kubelet[2755]: I0115 05:44:38.568604 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8fk\" (UniqueName: \"kubernetes.io/projected/647d95db-9ea2-4c12-b24e-24a6a7b2ddc1-kube-api-access-jn8fk\") pod \"goldmane-666569f655-q55xd\" (UID: \"647d95db-9ea2-4c12-b24e-24a6a7b2ddc1\") " pod="calico-system/goldmane-666569f655-q55xd" Jan 15 05:44:38.568650 kubelet[2755]: I0115 05:44:38.568637 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxmb\" (UniqueName: \"kubernetes.io/projected/c9216aea-9a46-4a8f-81a9-8d30cdf7722b-kube-api-access-jgxmb\") pod \"calico-kube-controllers-c8cc478f5-z859x\" (UID: \"c9216aea-9a46-4a8f-81a9-8d30cdf7722b\") " pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" Jan 15 05:44:38.568863 kubelet[2755]: I0115 05:44:38.568664 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rll4\" (UniqueName: \"kubernetes.io/projected/5e1c0e06-5a78-43e6-a8cc-cfe663be0279-kube-api-access-5rll4\") pod \"coredns-668d6bf9bc-wqfbb\" (UID: \"5e1c0e06-5a78-43e6-a8cc-cfe663be0279\") " pod="kube-system/coredns-668d6bf9bc-wqfbb" Jan 15 05:44:38.568863 kubelet[2755]: I0115 05:44:38.568692 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c7a473d-fbbd-41be-961a-cb9f606fd6ff-calico-apiserver-certs\") pod \"calico-apiserver-766d8cc98b-k6vdp\" (UID: \"1c7a473d-fbbd-41be-961a-cb9f606fd6ff\") " pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" Jan 15 05:44:38.568863 kubelet[2755]: I0115 05:44:38.568719 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647d95db-9ea2-4c12-b24e-24a6a7b2ddc1-config\") pod \"goldmane-666569f655-q55xd\" (UID: \"647d95db-9ea2-4c12-b24e-24a6a7b2ddc1\") " pod="calico-system/goldmane-666569f655-q55xd" Jan 15 05:44:38.568863 kubelet[2755]: I0115 05:44:38.568748 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/647d95db-9ea2-4c12-b24e-24a6a7b2ddc1-goldmane-key-pair\") pod \"goldmane-666569f655-q55xd\" (UID: \"647d95db-9ea2-4c12-b24e-24a6a7b2ddc1\") " pod="calico-system/goldmane-666569f655-q55xd" Jan 15 05:44:38.568863 kubelet[2755]: I0115 05:44:38.568774 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-backend-key-pair\") pod \"whisker-7c478c946b-q256m\" (UID: \"2e111366-f223-4b0d-ad74-a8fd8f32e679\") " pod="calico-system/whisker-7c478c946b-q256m" Jan 15 05:44:38.569073 kubelet[2755]: I0115 05:44:38.568795 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzd5\" (UniqueName: \"kubernetes.io/projected/2e111366-f223-4b0d-ad74-a8fd8f32e679-kube-api-access-7gzd5\") pod \"whisker-7c478c946b-q256m\" (UID: \"2e111366-f223-4b0d-ad74-a8fd8f32e679\") " pod="calico-system/whisker-7c478c946b-q256m" Jan 15 05:44:38.569073 kubelet[2755]: I0115 05:44:38.568812 2755 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e1c0e06-5a78-43e6-a8cc-cfe663be0279-config-volume\") pod \"coredns-668d6bf9bc-wqfbb\" (UID: \"5e1c0e06-5a78-43e6-a8cc-cfe663be0279\") " pod="kube-system/coredns-668d6bf9bc-wqfbb" Jan 15 05:44:38.659707 kubelet[2755]: E0115 05:44:38.659625 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:38.663532 containerd[1603]: time="2026-01-15T05:44:38.662172437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 05:44:38.740003 containerd[1603]: time="2026-01-15T05:44:38.739944096Z" level=error msg="Failed to destroy network for sandbox \"3ebc960bd72fc4706bc11d9fb0e37e9475e95aa0777773444691e1355aa0a940\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.745196 containerd[1603]: time="2026-01-15T05:44:38.745102567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzlc6,Uid:e9ba42b2-88e3-4065-bd62-0b6bb90b29e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc960bd72fc4706bc11d9fb0e37e9475e95aa0777773444691e1355aa0a940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.745730 kubelet[2755]: E0115 05:44:38.745623 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc960bd72fc4706bc11d9fb0e37e9475e95aa0777773444691e1355aa0a940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.745730 kubelet[2755]: E0115 05:44:38.745728 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc960bd72fc4706bc11d9fb0e37e9475e95aa0777773444691e1355aa0a940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:38.745854 kubelet[2755]: E0115 05:44:38.745749 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ebc960bd72fc4706bc11d9fb0e37e9475e95aa0777773444691e1355aa0a940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nzlc6" Jan 15 05:44:38.745854 kubelet[2755]: E0115 05:44:38.745816 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3ebc960bd72fc4706bc11d9fb0e37e9475e95aa0777773444691e1355aa0a940\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:38.779741 containerd[1603]: time="2026-01-15T05:44:38.779676394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-vvx8x,Uid:bad082bb-9731-4bff-b64e-9964fb68119a,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:44:38.790511 kubelet[2755]: E0115 05:44:38.789568 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:38.790823 containerd[1603]: time="2026-01-15T05:44:38.790708842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdqdl,Uid:37ca1d96-eda1-47b6-b5e5-48121189a9c0,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:38.800730 containerd[1603]: time="2026-01-15T05:44:38.799933395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q55xd,Uid:647d95db-9ea2-4c12-b24e-24a6a7b2ddc1,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:38.811619 containerd[1603]: time="2026-01-15T05:44:38.811484716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-k6vdp,Uid:1c7a473d-fbbd-41be-961a-cb9f606fd6ff,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:44:38.830697 containerd[1603]: time="2026-01-15T05:44:38.830611918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8cc478f5-z859x,Uid:c9216aea-9a46-4a8f-81a9-8d30cdf7722b,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:38.834369 kubelet[2755]: E0115 05:44:38.834241 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:38.834833 containerd[1603]: time="2026-01-15T05:44:38.834757683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqfbb,Uid:5e1c0e06-5a78-43e6-a8cc-cfe663be0279,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:38.844063 containerd[1603]: time="2026-01-15T05:44:38.843992526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c478c946b-q256m,Uid:2e111366-f223-4b0d-ad74-a8fd8f32e679,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:38.954199 containerd[1603]: time="2026-01-15T05:44:38.953789845Z" level=error msg="Failed to destroy network for sandbox \"dcce3ab3e01b946e0fa2adff59d0e8118ec5f01a524af26ab63ec1e59f57a2e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.967638 containerd[1603]: time="2026-01-15T05:44:38.967519650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdqdl,Uid:37ca1d96-eda1-47b6-b5e5-48121189a9c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcce3ab3e01b946e0fa2adff59d0e8118ec5f01a524af26ab63ec1e59f57a2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.968269 kubelet[2755]: E0115 
05:44:38.968204 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcce3ab3e01b946e0fa2adff59d0e8118ec5f01a524af26ab63ec1e59f57a2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.968350 kubelet[2755]: E0115 05:44:38.968289 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcce3ab3e01b946e0fa2adff59d0e8118ec5f01a524af26ab63ec1e59f57a2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wdqdl" Jan 15 05:44:38.968350 kubelet[2755]: E0115 05:44:38.968311 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcce3ab3e01b946e0fa2adff59d0e8118ec5f01a524af26ab63ec1e59f57a2e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wdqdl" Jan 15 05:44:38.968520 kubelet[2755]: E0115 05:44:38.968350 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wdqdl_kube-system(37ca1d96-eda1-47b6-b5e5-48121189a9c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wdqdl_kube-system(37ca1d96-eda1-47b6-b5e5-48121189a9c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcce3ab3e01b946e0fa2adff59d0e8118ec5f01a524af26ab63ec1e59f57a2e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wdqdl" podUID="37ca1d96-eda1-47b6-b5e5-48121189a9c0" Jan 15 05:44:38.980460 containerd[1603]: time="2026-01-15T05:44:38.980306446Z" level=error msg="Failed to destroy network for sandbox \"f9039843bc963b5acb40806a6f3b7474e507e83dd45b79e3dc972994231c5a98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.992157 containerd[1603]: time="2026-01-15T05:44:38.992124268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-vvx8x,Uid:bad082bb-9731-4bff-b64e-9964fb68119a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9039843bc963b5acb40806a6f3b7474e507e83dd45b79e3dc972994231c5a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.992724 containerd[1603]: time="2026-01-15T05:44:38.992583924Z" level=error msg="Failed to destroy network for sandbox \"05e6a08737746ec3323da293357652b5c09af8950587c54878c73aad13e5ff59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
15 05:44:38.992774 kubelet[2755]: E0115 05:44:38.992662 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9039843bc963b5acb40806a6f3b7474e507e83dd45b79e3dc972994231c5a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:38.992774 kubelet[2755]: E0115 05:44:38.992741 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9039843bc963b5acb40806a6f3b7474e507e83dd45b79e3dc972994231c5a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" Jan 15 05:44:38.992774 kubelet[2755]: E0115 05:44:38.992761 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9039843bc963b5acb40806a6f3b7474e507e83dd45b79e3dc972994231c5a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" Jan 15 05:44:38.992934 kubelet[2755]: E0115 05:44:38.992797 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-766d8cc98b-vvx8x_calico-apiserver(bad082bb-9731-4bff-b64e-9964fb68119a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-766d8cc98b-vvx8x_calico-apiserver(bad082bb-9731-4bff-b64e-9964fb68119a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9039843bc963b5acb40806a6f3b7474e507e83dd45b79e3dc972994231c5a98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:44:39.000837 containerd[1603]: time="2026-01-15T05:44:39.000722721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-k6vdp,Uid:1c7a473d-fbbd-41be-961a-cb9f606fd6ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e6a08737746ec3323da293357652b5c09af8950587c54878c73aad13e5ff59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.001252 kubelet[2755]: E0115 05:44:39.000963 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e6a08737746ec3323da293357652b5c09af8950587c54878c73aad13e5ff59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.001252 kubelet[2755]: E0115 05:44:39.001046 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"05e6a08737746ec3323da293357652b5c09af8950587c54878c73aad13e5ff59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" Jan 15 05:44:39.001252 kubelet[2755]: E0115 05:44:39.001066 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05e6a08737746ec3323da293357652b5c09af8950587c54878c73aad13e5ff59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" Jan 15 05:44:39.001355 kubelet[2755]: E0115 05:44:39.001095 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-766d8cc98b-k6vdp_calico-apiserver(1c7a473d-fbbd-41be-961a-cb9f606fd6ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-766d8cc98b-k6vdp_calico-apiserver(1c7a473d-fbbd-41be-961a-cb9f606fd6ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05e6a08737746ec3323da293357652b5c09af8950587c54878c73aad13e5ff59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:44:39.003477 containerd[1603]: time="2026-01-15T05:44:39.003345743Z" level=error msg="Failed to destroy network for sandbox \"b8f0ac36eb1e93c1fc78eb2596f99336f945a0aa2cd39a97427c500006aa14ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.008812 containerd[1603]: time="2026-01-15T05:44:39.008698219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q55xd,Uid:647d95db-9ea2-4c12-b24e-24a6a7b2ddc1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f0ac36eb1e93c1fc78eb2596f99336f945a0aa2cd39a97427c500006aa14ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.008934 kubelet[2755]: E0115 05:44:39.008891 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f0ac36eb1e93c1fc78eb2596f99336f945a0aa2cd39a97427c500006aa14ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.008976 kubelet[2755]: E0115 05:44:39.008929 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f0ac36eb1e93c1fc78eb2596f99336f945a0aa2cd39a97427c500006aa14ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q55xd" 
Jan 15 05:44:39.008976 kubelet[2755]: E0115 05:44:39.008948 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f0ac36eb1e93c1fc78eb2596f99336f945a0aa2cd39a97427c500006aa14ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q55xd" Jan 15 05:44:39.009091 kubelet[2755]: E0115 05:44:39.009008 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-q55xd_calico-system(647d95db-9ea2-4c12-b24e-24a6a7b2ddc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-q55xd_calico-system(647d95db-9ea2-4c12-b24e-24a6a7b2ddc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8f0ac36eb1e93c1fc78eb2596f99336f945a0aa2cd39a97427c500006aa14ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:44:39.018032 containerd[1603]: time="2026-01-15T05:44:39.017812139Z" level=error msg="Failed to destroy network for sandbox \"f4b2950ddc24a1680a293c77ee0cf6f18d049bc095e42bdfae8cbebcd458e78b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.027819 containerd[1603]: time="2026-01-15T05:44:39.027732221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8cc478f5-z859x,Uid:c9216aea-9a46-4a8f-81a9-8d30cdf7722b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4b2950ddc24a1680a293c77ee0cf6f18d049bc095e42bdfae8cbebcd458e78b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.029001 kubelet[2755]: E0115 05:44:39.028132 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4b2950ddc24a1680a293c77ee0cf6f18d049bc095e42bdfae8cbebcd458e78b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.029001 kubelet[2755]: E0115 05:44:39.028193 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4b2950ddc24a1680a293c77ee0cf6f18d049bc095e42bdfae8cbebcd458e78b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" Jan 15 05:44:39.029001 kubelet[2755]: E0115 05:44:39.028220 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4b2950ddc24a1680a293c77ee0cf6f18d049bc095e42bdfae8cbebcd458e78b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" Jan 15 05:44:39.029172 kubelet[2755]: E0115 05:44:39.028274 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c8cc478f5-z859x_calico-system(c9216aea-9a46-4a8f-81a9-8d30cdf7722b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c8cc478f5-z859x_calico-system(c9216aea-9a46-4a8f-81a9-8d30cdf7722b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4b2950ddc24a1680a293c77ee0cf6f18d049bc095e42bdfae8cbebcd458e78b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:44:39.032057 containerd[1603]: time="2026-01-15T05:44:39.031859051Z" level=error msg="Failed to destroy network for sandbox \"45b9192846eccfc7b79adc445f7d242f54e712474622574d50d34a20e46a36f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.038141 containerd[1603]: time="2026-01-15T05:44:39.038009333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c478c946b-q256m,Uid:2e111366-f223-4b0d-ad74-a8fd8f32e679,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b9192846eccfc7b79adc445f7d242f54e712474622574d50d34a20e46a36f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.038715 kubelet[2755]: E0115 05:44:39.038578 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b9192846eccfc7b79adc445f7d242f54e712474622574d50d34a20e46a36f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.038715 kubelet[2755]: E0115 05:44:39.038629 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b9192846eccfc7b79adc445f7d242f54e712474622574d50d34a20e46a36f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c478c946b-q256m" Jan 15 05:44:39.038715 kubelet[2755]: E0115 05:44:39.038656 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45b9192846eccfc7b79adc445f7d242f54e712474622574d50d34a20e46a36f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c478c946b-q256m" Jan 15 05:44:39.038819 kubelet[2755]: E0115 05:44:39.038705 2755 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c478c946b-q256m_calico-system(2e111366-f223-4b0d-ad74-a8fd8f32e679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c478c946b-q256m_calico-system(2e111366-f223-4b0d-ad74-a8fd8f32e679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45b9192846eccfc7b79adc445f7d242f54e712474622574d50d34a20e46a36f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c478c946b-q256m" podUID="2e111366-f223-4b0d-ad74-a8fd8f32e679" Jan 15 05:44:39.045379 containerd[1603]: time="2026-01-15T05:44:39.045304769Z" level=error msg="Failed to destroy network for sandbox \"d1c914f6e4f5c539faf25a2b2464e3bd543a4794b794cf7c950031c0804c7e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.053323 containerd[1603]: time="2026-01-15T05:44:39.053166535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqfbb,Uid:5e1c0e06-5a78-43e6-a8cc-cfe663be0279,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c914f6e4f5c539faf25a2b2464e3bd543a4794b794cf7c950031c0804c7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.053979 kubelet[2755]: E0115 05:44:39.053484 2755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c914f6e4f5c539faf25a2b2464e3bd543a4794b794cf7c950031c0804c7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:44:39.053979 kubelet[2755]: E0115 05:44:39.053536 2755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c914f6e4f5c539faf25a2b2464e3bd543a4794b794cf7c950031c0804c7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wqfbb" Jan 15 05:44:39.053979 kubelet[2755]: E0115 05:44:39.053561 2755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c914f6e4f5c539faf25a2b2464e3bd543a4794b794cf7c950031c0804c7e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wqfbb" Jan 15 05:44:39.054594 kubelet[2755]: E0115 05:44:39.053608 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wqfbb_kube-system(5e1c0e06-5a78-43e6-a8cc-cfe663be0279)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wqfbb_kube-system(5e1c0e06-5a78-43e6-a8cc-cfe663be0279)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d1c914f6e4f5c539faf25a2b2464e3bd543a4794b794cf7c950031c0804c7e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wqfbb" podUID="5e1c0e06-5a78-43e6-a8cc-cfe663be0279" Jan 15 05:44:39.360392 systemd[1]: run-netns-cni\x2d7557006a\x2dad6a\x2d78f4\x2d98a9\x2dec74b40fcda4.mount: Deactivated successfully. Jan 15 05:44:46.184822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201903205.mount: Deactivated successfully. Jan 15 05:44:46.407662 containerd[1603]: time="2026-01-15T05:44:46.407601780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:46.409112 containerd[1603]: time="2026-01-15T05:44:46.409076125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 15 05:44:46.411362 containerd[1603]: time="2026-01-15T05:44:46.411278837Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:46.432087 containerd[1603]: time="2026-01-15T05:44:46.431949723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:44:46.432737 containerd[1603]: time="2026-01-15T05:44:46.432631216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.770426549s" Jan 15 05:44:46.432737 containerd[1603]: time="2026-01-15T05:44:46.432694284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 15 05:44:46.464310 containerd[1603]: time="2026-01-15T05:44:46.452145687Z" level=info msg="CreateContainer within sandbox \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 05:44:46.494533 containerd[1603]: time="2026-01-15T05:44:46.494348846Z" level=info msg="Container 3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:46.509769 containerd[1603]: time="2026-01-15T05:44:46.509647544Z" level=info msg="CreateContainer within sandbox \"f1a8748f286d602448d3dbbc3c63aa882328037baf3ce39efd84a1311e56f8c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4\"" Jan 15 05:44:46.510821 containerd[1603]: time="2026-01-15T05:44:46.510796963Z" level=info msg="StartContainer for \"3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4\"" Jan 15 05:44:46.513091 containerd[1603]: time="2026-01-15T05:44:46.512981838Z" level=info msg="connecting to shim 3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4" address="unix:///run/containerd/s/be27b2dda89ee2a879bd5b2648e05ba54ebf7411721c707411c6a7e1648843df" protocol=ttrpc version=3 Jan 15 
05:44:46.549831 systemd[1]: Started cri-containerd-3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4.scope - libcontainer container 3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4. Jan 15 05:44:46.639000 audit: BPF prog-id=172 op=LOAD Jan 15 05:44:46.643607 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 15 05:44:46.643727 kernel: audit: type=1334 audit(1768455886.639:565): prog-id=172 op=LOAD Jan 15 05:44:46.639000 audit[3836]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.668317 kernel: audit: type=1300 audit(1768455886.639:565): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.689603 kernel: audit: type=1327 audit(1768455886.639:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.639000 audit: BPF prog-id=173 op=LOAD Jan 15 05:44:46.695355 kernel: audit: type=1334 audit(1768455886.639:566): prog-id=173 op=LOAD Jan 15 05:44:46.695607 kernel: audit: type=1300 audit(1768455886.639:566): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.639000 audit[3836]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.714860 kernel: audit: type=1327 audit(1768455886.639:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.640000 audit: BPF prog-id=173 op=UNLOAD Jan 15 05:44:46.734835 kernel: audit: type=1334 audit(1768455886.640:567): prog-id=173 op=UNLOAD Jan 15 05:44:46.735143 kernel: audit: type=1300 audit(1768455886.640:567): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3288 
pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.640000 audit[3836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.760800 containerd[1603]: time="2026-01-15T05:44:46.760697416Z" level=info msg="StartContainer for \"3c15a2a1fae8a4343fc838054dfa2a524e057dc1e7c07611afc8aa49dd9fa3f4\" returns successfully" Jan 15 05:44:46.768292 kernel: audit: type=1327 audit(1768455886.640:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.640000 audit: BPF prog-id=172 op=UNLOAD Jan 15 05:44:46.640000 audit[3836]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.640000 audit: BPF prog-id=174 op=LOAD Jan 15 05:44:46.640000 audit[3836]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3288 pid=3836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363313561326131666165386134333433666338333830353464666132 Jan 15 05:44:46.773742 kernel: audit: type=1334 audit(1768455886.640:568): prog-id=172 op=UNLOAD Jan 15 05:44:46.892550 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 05:44:46.893014 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 15 05:44:47.146982 kubelet[2755]: I0115 05:44:47.146908 2755 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzd5\" (UniqueName: \"kubernetes.io/projected/2e111366-f223-4b0d-ad74-a8fd8f32e679-kube-api-access-7gzd5\") pod \"2e111366-f223-4b0d-ad74-a8fd8f32e679\" (UID: \"2e111366-f223-4b0d-ad74-a8fd8f32e679\") " Jan 15 05:44:47.146982 kubelet[2755]: I0115 05:44:47.146980 2755 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-backend-key-pair\") pod \"2e111366-f223-4b0d-ad74-a8fd8f32e679\" (UID: \"2e111366-f223-4b0d-ad74-a8fd8f32e679\") " Jan 15 05:44:47.148287 kubelet[2755]: I0115 05:44:47.147001 2755 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-ca-bundle\") pod \"2e111366-f223-4b0d-ad74-a8fd8f32e679\" (UID: \"2e111366-f223-4b0d-ad74-a8fd8f32e679\") " Jan 15 05:44:47.148974 kubelet[2755]: I0115 05:44:47.148899 2755 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2e111366-f223-4b0d-ad74-a8fd8f32e679" (UID: "2e111366-f223-4b0d-ad74-a8fd8f32e679"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 05:44:47.155728 kubelet[2755]: I0115 05:44:47.155294 2755 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2e111366-f223-4b0d-ad74-a8fd8f32e679" (UID: "2e111366-f223-4b0d-ad74-a8fd8f32e679"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 05:44:47.158161 kubelet[2755]: I0115 05:44:47.157978 2755 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e111366-f223-4b0d-ad74-a8fd8f32e679-kube-api-access-7gzd5" (OuterVolumeSpecName: "kube-api-access-7gzd5") pod "2e111366-f223-4b0d-ad74-a8fd8f32e679" (UID: "2e111366-f223-4b0d-ad74-a8fd8f32e679"). InnerVolumeSpecName "kube-api-access-7gzd5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 05:44:47.189240 systemd[1]: var-lib-kubelet-pods-2e111366\x2df223\x2d4b0d\x2dad74\x2da8fd8f32e679-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7gzd5.mount: Deactivated successfully. Jan 15 05:44:47.189376 systemd[1]: var-lib-kubelet-pods-2e111366\x2df223\x2d4b0d\x2dad74\x2da8fd8f32e679-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 15 05:44:47.248271 kubelet[2755]: I0115 05:44:47.248075 2755 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7gzd5\" (UniqueName: \"kubernetes.io/projected/2e111366-f223-4b0d-ad74-a8fd8f32e679-kube-api-access-7gzd5\") on node \"localhost\" DevicePath \"\"" Jan 15 05:44:47.248271 kubelet[2755]: I0115 05:44:47.248100 2755 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 15 05:44:47.248271 kubelet[2755]: I0115 05:44:47.248111 2755 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e111366-f223-4b0d-ad74-a8fd8f32e679-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 15 05:44:47.697063 kubelet[2755]: E0115 05:44:47.696867 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:47.707892 systemd[1]: Removed slice kubepods-besteffort-pod2e111366_f223_4b0d_ad74_a8fd8f32e679.slice - libcontainer container kubepods-besteffort-pod2e111366_f223_4b0d_ad74_a8fd8f32e679.slice. Jan 15 05:44:47.743826 kubelet[2755]: I0115 05:44:47.743648 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7z7pn" podStartSLOduration=1.9172417990000001 podStartE2EDuration="16.743631192s" podCreationTimestamp="2026-01-15 05:44:31 +0000 UTC" firstStartedPulling="2026-01-15 05:44:31.607730146 +0000 UTC m=+19.205772676" lastFinishedPulling="2026-01-15 05:44:46.434119538 +0000 UTC m=+34.032162069" observedRunningTime="2026-01-15 05:44:47.728059201 +0000 UTC m=+35.326101741" watchObservedRunningTime="2026-01-15 05:44:47.743631192 +0000 UTC m=+35.341673722" Jan 15 05:44:47.808720 systemd[1]: Created slice kubepods-besteffort-pod99bf8dba_e773_442c_a667_c161cf0a56cd.slice - libcontainer container kubepods-besteffort-pod99bf8dba_e773_442c_a667_c161cf0a56cd.slice. 
Jan 15 05:44:47.862159 kubelet[2755]: I0115 05:44:47.862014 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99bf8dba-e773-442c-a667-c161cf0a56cd-whisker-backend-key-pair\") pod \"whisker-66f8b465fb-vtc8s\" (UID: \"99bf8dba-e773-442c-a667-c161cf0a56cd\") " pod="calico-system/whisker-66f8b465fb-vtc8s" Jan 15 05:44:47.862159 kubelet[2755]: I0115 05:44:47.862111 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99bf8dba-e773-442c-a667-c161cf0a56cd-whisker-ca-bundle\") pod \"whisker-66f8b465fb-vtc8s\" (UID: \"99bf8dba-e773-442c-a667-c161cf0a56cd\") " pod="calico-system/whisker-66f8b465fb-vtc8s" Jan 15 05:44:47.862159 kubelet[2755]: I0115 05:44:47.862129 2755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2v5\" (UniqueName: \"kubernetes.io/projected/99bf8dba-e773-442c-a667-c161cf0a56cd-kube-api-access-rb2v5\") pod \"whisker-66f8b465fb-vtc8s\" (UID: \"99bf8dba-e773-442c-a667-c161cf0a56cd\") " pod="calico-system/whisker-66f8b465fb-vtc8s" Jan 15 05:44:48.116629 containerd[1603]: time="2026-01-15T05:44:48.116546667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f8b465fb-vtc8s,Uid:99bf8dba-e773-442c-a667-c161cf0a56cd,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:48.423059 systemd-networkd[1504]: calic81abce3b8a: Link UP Jan 15 05:44:48.424369 systemd-networkd[1504]: calic81abce3b8a: Gained carrier Jan 15 05:44:48.444253 containerd[1603]: 2026-01-15 05:44:48.158 [INFO][3903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 05:44:48.444253 containerd[1603]: 2026-01-15 05:44:48.193 [INFO][3903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66f8b465fb--vtc8s-eth0 whisker-66f8b465fb- calico-system 99bf8dba-e773-442c-a667-c161cf0a56cd 890 0 2026-01-15 05:44:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66f8b465fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66f8b465fb-vtc8s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic81abce3b8a [] [] }} ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-" Jan 15 05:44:48.444253 containerd[1603]: 2026-01-15 05:44:48.193 [INFO][3903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.444253 containerd[1603]: 2026-01-15 05:44:48.324 [INFO][3918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" HandleID="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Workload="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.325 [INFO][3918] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" HandleID="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Workload="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000460e50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66f8b465fb-vtc8s", "timestamp":"2026-01-15 05:44:48.324580724 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.325 [INFO][3918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.326 [INFO][3918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.326 [INFO][3918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.339 [INFO][3918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" host="localhost" Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.353 [INFO][3918] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.363 [INFO][3918] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.366 [INFO][3918] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.373 [INFO][3918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:48.444895 containerd[1603]: 2026-01-15 05:44:48.373 [INFO][3918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" host="localhost" Jan 15 05:44:48.445234 containerd[1603]: 2026-01-15 05:44:48.377 [INFO][3918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69 Jan 15 05:44:48.445234 containerd[1603]: 2026-01-15 05:44:48.385 [INFO][3918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" host="localhost" Jan 15 05:44:48.445234 containerd[1603]: 2026-01-15 05:44:48.396 [INFO][3918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" host="localhost" Jan 15 05:44:48.445234 containerd[1603]: 2026-01-15 05:44:48.396 [INFO][3918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" host="localhost" Jan 15 05:44:48.445234 containerd[1603]: 2026-01-15 05:44:48.396 [INFO][3918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:48.445234 containerd[1603]: 2026-01-15 05:44:48.396 [INFO][3918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" HandleID="k8s-pod-network.51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Workload="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.445398 containerd[1603]: 2026-01-15 05:44:48.401 [INFO][3903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66f8b465fb--vtc8s-eth0", GenerateName:"whisker-66f8b465fb-", Namespace:"calico-system", SelfLink:"", UID:"99bf8dba-e773-442c-a667-c161cf0a56cd", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f8b465fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66f8b465fb-vtc8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic81abce3b8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:48.445398 containerd[1603]: 2026-01-15 05:44:48.401 [INFO][3903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.445630 containerd[1603]: 2026-01-15 05:44:48.401 [INFO][3903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic81abce3b8a ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.445630 containerd[1603]: 2026-01-15 05:44:48.424 [INFO][3903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.445758 containerd[1603]: 2026-01-15 05:44:48.425 [INFO][3903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66f8b465fb--vtc8s-eth0", GenerateName:"whisker-66f8b465fb-", Namespace:"calico-system", SelfLink:"", UID:"99bf8dba-e773-442c-a667-c161cf0a56cd", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f8b465fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69", Pod:"whisker-66f8b465fb-vtc8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic81abce3b8a", MAC:"62:62:93:5d:20:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:48.445922 containerd[1603]: 2026-01-15 05:44:48.440 [INFO][3903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" Namespace="calico-system" Pod="whisker-66f8b465fb-vtc8s" WorkloadEndpoint="localhost-k8s-whisker--66f8b465fb--vtc8s-eth0" Jan 15 05:44:48.538627 kubelet[2755]: I0115 05:44:48.538273 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e111366-f223-4b0d-ad74-a8fd8f32e679" path="/var/lib/kubelet/pods/2e111366-f223-4b0d-ad74-a8fd8f32e679/volumes" Jan 15 05:44:48.555409 containerd[1603]: time="2026-01-15T05:44:48.555244370Z" level=info msg="connecting to shim 51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69" address="unix:///run/containerd/s/52a7f5c0160be15157e23dc03953f038199deaedeb273db3b27dff2f4e309edc" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:48.642802 systemd[1]: Started cri-containerd-51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69.scope - libcontainer container 51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69. 
Jan 15 05:44:48.681000 audit: BPF prog-id=175 op=LOAD Jan 15 05:44:48.685000 audit: BPF prog-id=176 op=LOAD Jan 15 05:44:48.685000 audit[4018]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.685000 audit: BPF prog-id=176 op=UNLOAD Jan 15 05:44:48.685000 audit[4018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.686000 audit: BPF prog-id=177 op=LOAD Jan 15 05:44:48.686000 audit[4018]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.687000 audit: BPF prog-id=178 op=LOAD Jan 15 05:44:48.687000 audit[4018]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.687000 audit: BPF prog-id=178 op=UNLOAD Jan 15 05:44:48.687000 audit[4018]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.687000 audit: BPF prog-id=177 op=UNLOAD Jan 15 05:44:48.687000 audit[4018]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.687000 audit: BPF prog-id=179 op=LOAD Jan 15 05:44:48.687000 audit[4018]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=3981 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531616666643862346439313231386231306263646433313664633939 Jan 15 05:44:48.690908 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:48.700197 kubelet[2755]: I0115 05:44:48.700159 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 05:44:48.701374 kubelet[2755]: E0115 05:44:48.700755 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:48.809054 containerd[1603]: time="2026-01-15T05:44:48.808783776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f8b465fb-vtc8s,Uid:99bf8dba-e773-442c-a667-c161cf0a56cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"51affd8b4d91218b10bcdd316dc999dde872e50f535a57b2551f22afec2a8e69\"" Jan 15 05:44:48.844707 containerd[1603]: time="2026-01-15T05:44:48.842918415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:44:48.920181 containerd[1603]: time="2026-01-15T05:44:48.919689037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:48.922132 containerd[1603]: time="2026-01-15T05:44:48.921930958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:44:48.922132 containerd[1603]: time="2026-01-15T05:44:48.922029151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:48.923580 kubelet[2755]: E0115 05:44:48.923386 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:44:48.923580 kubelet[2755]: E0115 05:44:48.923555 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:44:48.945736 kubelet[2755]: E0115 05:44:48.945570 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6de01969a6dc416d90e12cf9d6829f2f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rb2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66f8b465fb-vtc8s_calico-system(99bf8dba-e773-442c-a667-c161cf0a56cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:48.951165 containerd[1603]: time="2026-01-15T05:44:48.951095097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:44:48.969000 audit: BPF prog-id=180 op=LOAD Jan 15 05:44:48.969000 audit[4109]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde84ccf90 a2=98 a3=1fffffffffffffff items=0 ppid=3946 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.969000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:44:48.970000 audit: BPF prog-id=180 op=UNLOAD Jan 15 05:44:48.970000 audit[4109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde84ccf60 a3=0 items=0 ppid=3946 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 
05:44:48.971000 audit: BPF prog-id=181 op=LOAD Jan 15 05:44:48.971000 audit[4109]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde84cce70 a2=94 a3=3 items=0 ppid=3946 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.971000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:44:48.971000 audit: BPF prog-id=181 op=UNLOAD Jan 15 05:44:48.971000 audit[4109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffde84cce70 a2=94 a3=3 items=0 ppid=3946 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.971000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:44:48.971000 audit: BPF prog-id=182 op=LOAD Jan 15 05:44:48.971000 audit[4109]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde84cceb0 a2=94 a3=7ffde84cd090 items=0 ppid=3946 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.971000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:44:48.971000 audit: BPF prog-id=182 op=UNLOAD Jan 15 05:44:48.971000 audit[4109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffde84cceb0 a2=94 a3=7ffde84cd090 items=0 ppid=3946 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.971000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:44:48.977000 audit: BPF prog-id=183 op=LOAD Jan 15 05:44:48.977000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd09f5ef0 a2=98 a3=3 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:48.977000 audit: BPF prog-id=183 op=UNLOAD Jan 15 05:44:48.977000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd09f5ec0 a3=0 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:48.977000 audit: BPF prog-id=184 op=LOAD Jan 15 05:44:48.977000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd09f5ce0 a2=94 a3=54428f items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:48.977000 audit: BPF prog-id=184 op=UNLOAD Jan 15 05:44:48.977000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd09f5ce0 a2=94 a3=54428f items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:48.977000 audit: BPF prog-id=185 op=LOAD Jan 15 05:44:48.977000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd09f5d10 a2=94 a3=2 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:48.977000 audit: BPF prog-id=185 op=UNLOAD Jan 15 05:44:48.977000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd09f5d10 a2=0 a3=2 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:48.977000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.012871 containerd[1603]: time="2026-01-15T05:44:49.012828072Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:49.018944 containerd[1603]: time="2026-01-15T05:44:49.018809368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:44:49.018944 containerd[1603]: time="2026-01-15T05:44:49.018903554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:49.019689 kubelet[2755]: E0115 05:44:49.019277 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:44:49.019689 kubelet[2755]: E0115 05:44:49.019336 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:44:49.022591 kubelet[2755]: E0115 05:44:49.022365 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66f8b465fb-vtc8s_calico-system(99bf8dba-e773-442c-a667-c161cf0a56cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:49.023980 kubelet[2755]: E0115 05:44:49.023831 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:44:49.208000 audit: BPF prog-id=186 op=LOAD Jan 15 05:44:49.208000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcd09f5bd0 a2=94 a3=1 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.208000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.208000 audit: BPF prog-id=186 op=UNLOAD Jan 15 05:44:49.208000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcd09f5bd0 a2=94 a3=1 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.208000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.221000 audit: BPF prog-id=187 op=LOAD Jan 15 05:44:49.221000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd09f5bc0 a2=94 a3=4 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.221000 audit: BPF prog-id=187 op=UNLOAD Jan 15 05:44:49.221000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcd09f5bc0 a2=0 a3=4 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.221000 audit: BPF prog-id=188 op=LOAD Jan 15 05:44:49.221000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcd09f5a20 a2=94 a3=5 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.221000 audit: BPF prog-id=188 op=UNLOAD Jan 15 05:44:49.221000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcd09f5a20 a2=0 a3=5 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.221000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.222000 audit: BPF prog-id=189 op=LOAD Jan 15 05:44:49.222000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcd09f5c40 a2=94 a3=6 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.222000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.222000 audit: BPF prog-id=189 op=UNLOAD Jan 15 05:44:49.222000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcd09f5c40 a2=0 a3=6 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.222000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.222000 audit: BPF prog-id=190 op=LOAD Jan 15 05:44:49.222000 audit[4110]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=5 a1=7ffcd09f53f0 a2=94 a3=88 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.222000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.222000 audit: BPF prog-id=191 op=LOAD Jan 15 05:44:49.222000 audit[4110]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcd09f5270 a2=94 a3=2 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.222000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.222000 audit: BPF prog-id=191 op=UNLOAD Jan 15 05:44:49.222000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcd09f52a0 a2=0 a3=7ffcd09f53a0 items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.222000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.223000 audit: BPF prog-id=190 op=UNLOAD Jan 15 05:44:49.223000 audit[4110]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1c67fd10 a2=0 a3=526df839fbdd578d items=0 ppid=3946 pid=4110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.223000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:44:49.236000 audit: BPF prog-id=192 op=LOAD Jan 15 05:44:49.236000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd422535f0 a2=98 a3=1999999999999999 items=0 ppid=3946 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.236000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:44:49.236000 audit: BPF prog-id=192 op=UNLOAD Jan 15 05:44:49.236000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd422535c0 a3=0 items=0 ppid=3946 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.236000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:44:49.236000 audit: BPF prog-id=193 op=LOAD Jan 15 05:44:49.236000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd422534d0 a2=94 a3=ffff items=0 ppid=3946 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.236000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:44:49.236000 audit: BPF prog-id=193 op=UNLOAD Jan 15 05:44:49.236000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd422534d0 a2=94 a3=ffff items=0 ppid=3946 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.236000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:44:49.236000 audit: BPF prog-id=194 op=LOAD Jan 15 05:44:49.236000 audit[4113]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd42253510 a2=94 a3=7ffd422536f0 items=0 ppid=3946 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.236000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:44:49.236000 audit: BPF prog-id=194 op=UNLOAD Jan 15 05:44:49.236000 audit[4113]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd42253510 a2=94 a3=7ffd422536f0 items=0 ppid=3946 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.236000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:44:49.335258 systemd-networkd[1504]: vxlan.calico: Link UP Jan 15 05:44:49.335274 systemd-networkd[1504]: vxlan.calico: Gained carrier Jan 15 05:44:49.377000 audit: BPF prog-id=195 op=LOAD Jan 15 05:44:49.377000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd194fb930 a2=98 a3=0 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.377000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.380000 audit: BPF prog-id=195 op=UNLOAD Jan 15 05:44:49.380000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd194fb900 a3=0 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.380000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.380000 audit: BPF prog-id=196 op=LOAD Jan 15 05:44:49.380000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd194fb740 a2=94 a3=54428f items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.380000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.380000 audit: BPF prog-id=196 op=UNLOAD Jan 15 05:44:49.380000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd194fb740 a2=94 a3=54428f items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.380000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.381000 audit: BPF prog-id=197 op=LOAD Jan 15 05:44:49.381000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd194fb770 a2=94 a3=2 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.381000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.381000 audit: BPF prog-id=197 op=UNLOAD Jan 15 05:44:49.381000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd194fb770 a2=0 a3=2 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.381000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.381000 audit: BPF prog-id=198 op=LOAD Jan 15 05:44:49.381000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd194fb520 a2=94 a3=4 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.381000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.381000 audit: BPF prog-id=198 op=UNLOAD Jan 15 05:44:49.381000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd194fb520 a2=94 a3=4 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.381000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.381000 audit: BPF prog-id=199 op=LOAD Jan 15 05:44:49.381000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd194fb620 a2=94 a3=7ffd194fb7a0 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.381000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.381000 audit: BPF prog-id=199 op=UNLOAD Jan 15 05:44:49.381000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd194fb620 a2=0 a3=7ffd194fb7a0 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.381000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.384000 audit: BPF prog-id=200 op=LOAD Jan 15 05:44:49.384000 audit[4137]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd194fad50 a2=94 a3=2 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.384000 audit: BPF prog-id=200 op=UNLOAD Jan 15 05:44:49.384000 audit[4137]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd194fad50 a2=0 a3=2 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.384000 audit: BPF prog-id=201 op=LOAD Jan 15 05:44:49.384000 audit[4137]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd194fae50 a2=94 a3=30 items=0 ppid=3946 pid=4137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.384000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:44:49.399000 audit: BPF prog-id=202 op=LOAD Jan 15 05:44:49.399000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce78229c0 a2=98 a3=0 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.399000 audit: BPF prog-id=202 op=UNLOAD Jan 15 05:44:49.399000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce7822990 a3=0 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.399000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.400000 audit: BPF prog-id=203 op=LOAD Jan 15 05:44:49.400000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce78227b0 a2=94 a3=54428f items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.400000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.400000 audit: BPF prog-id=203 op=UNLOAD Jan 15 05:44:49.400000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce78227b0 a2=94 a3=54428f items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.400000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.400000 audit: BPF prog-id=204 op=LOAD Jan 15 05:44:49.400000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce78227e0 a2=94 a3=2 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.400000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.401000 audit: BPF prog-id=204 op=UNLOAD Jan 15 05:44:49.401000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce78227e0 a2=0 a3=2 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.401000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.629000 audit: BPF prog-id=205 op=LOAD Jan 15 05:44:49.629000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffce78226a0 a2=94 a3=1 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.629000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.630000 audit: BPF prog-id=205 op=UNLOAD Jan 15 05:44:49.630000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffce78226a0 a2=94 a3=1 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.630000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.640000 audit: BPF prog-id=206 op=LOAD Jan 15 05:44:49.640000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce7822690 a2=94 a3=4 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.640000 audit: BPF prog-id=206 op=UNLOAD Jan 15 05:44:49.640000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffce7822690 a2=0 a3=4 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.640000 audit: BPF prog-id=207 op=LOAD Jan 15 05:44:49.640000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce78224f0 a2=94 a3=5 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.640000 audit: BPF prog-id=207 op=UNLOAD Jan 15 05:44:49.640000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce78224f0 a2=0 a3=5 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.640000 audit: BPF prog-id=208 op=LOAD Jan 15 05:44:49.640000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce7822710 a2=94 a3=6 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.640000 audit: BPF prog-id=208 op=UNLOAD Jan 15 05:44:49.640000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffce7822710 a2=0 a3=6 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.640000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.641000 audit: BPF prog-id=209 op=LOAD Jan 15 05:44:49.641000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffce7821ec0 a2=94 a3=88 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.641000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.641000 audit: BPF prog-id=210 op=LOAD Jan 15 05:44:49.641000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffce7821d40 a2=94 a3=2 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.641000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.641000 audit: BPF prog-id=210 op=UNLOAD Jan 15 05:44:49.641000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffce7821d70 a2=0 
a3=7ffce7821e70 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.641000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.642000 audit: BPF prog-id=209 op=UNLOAD Jan 15 05:44:49.642000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=358e7d10 a2=0 a3=8e752e38faf75691 items=0 ppid=3946 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.642000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:44:49.661000 audit: BPF prog-id=201 op=UNLOAD Jan 15 05:44:49.661000 audit[3946]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000fbb2c0 a2=0 a3=0 items=0 ppid=3933 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.661000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 15 05:44:49.691696 systemd-networkd[1504]: calic81abce3b8a: Gained IPv6LL Jan 15 05:44:49.711171 kubelet[2755]: E0115 05:44:49.711105 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:44:49.750000 audit[4169]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:49.750000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcd25d2680 a2=0 a3=7ffcd25d266c items=0 ppid=2864 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:49.755000 audit[4169]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:49.755000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 
a1=7ffcd25d2680 a2=0 a3=0 items=0 ppid=2864 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:49.762000 audit[4174]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4174 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:49.762000 audit[4174]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc57a2cbc0 a2=0 a3=7ffc57a2cbac items=0 ppid=3946 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.762000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:49.763000 audit[4173]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:49.763000 audit[4173]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff19a53240 a2=0 a3=7fff19a5322c items=0 ppid=3946 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.763000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:49.778000 audit[4175]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4175 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:49.778000 audit[4175]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd9e782aa0 a2=0 a3=7ffd9e782a8c items=0 ppid=3946 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.778000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:49.780000 audit[4176]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4176 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:49.780000 audit[4176]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fffa22991c0 a2=0 a3=7fffa22991ac items=0 ppid=3946 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:49.780000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:50.713014 kubelet[2755]: E0115 05:44:50.712931 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:44:51.035960 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Jan 15 05:44:52.522226 kubelet[2755]: E0115 05:44:52.522139 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:52.524847 kubelet[2755]: E0115 05:44:52.524050 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:52.524897 containerd[1603]: time="2026-01-15T05:44:52.522778670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-vvx8x,Uid:bad082bb-9731-4bff-b64e-9964fb68119a,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:44:52.525610 containerd[1603]: time="2026-01-15T05:44:52.524755729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqfbb,Uid:5e1c0e06-5a78-43e6-a8cc-cfe663be0279,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:52.525610 containerd[1603]: time="2026-01-15T05:44:52.525210120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdqdl,Uid:37ca1d96-eda1-47b6-b5e5-48121189a9c0,Namespace:kube-system,Attempt:0,}" Jan 15 05:44:52.527151 containerd[1603]: time="2026-01-15T05:44:52.526919767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8cc478f5-z859x,Uid:c9216aea-9a46-4a8f-81a9-8d30cdf7722b,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:52.870917 systemd-networkd[1504]: cali85324a7581c: Link UP Jan 15 05:44:52.872152 systemd-networkd[1504]: cali85324a7581c: Gained carrier Jan 15 05:44:52.909093 containerd[1603]: 2026-01-15 05:44:52.680 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0 calico-kube-controllers-c8cc478f5- calico-system c9216aea-9a46-4a8f-81a9-8d30cdf7722b 818 0 2026-01-15 05:44:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c8cc478f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c8cc478f5-z859x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85324a7581c [] [] }} ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-" Jan 15 05:44:52.909093 containerd[1603]: 2026-01-15 05:44:52.681 [INFO][4216] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.909093 containerd[1603]: 2026-01-15 05:44:52.765 [INFO][4256] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" HandleID="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Workload="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.766 [INFO][4256] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" HandleID="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Workload="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c8cc478f5-z859x", "timestamp":"2026-01-15 05:44:52.765633748 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.766 [INFO][4256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.766 [INFO][4256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.766 [INFO][4256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.778 [INFO][4256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" host="localhost" Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.793 [INFO][4256] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.804 [INFO][4256] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.811 [INFO][4256] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.820 [INFO][4256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:52.909413 containerd[1603]: 2026-01-15 05:44:52.823 [INFO][4256] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" host="localhost" Jan 15 05:44:52.909876 containerd[1603]: 2026-01-15 05:44:52.833 [INFO][4256] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a Jan 15 05:44:52.909876 containerd[1603]: 2026-01-15 05:44:52.841 [INFO][4256] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" host="localhost" Jan 15 05:44:52.909876 containerd[1603]: 2026-01-15 05:44:52.856 [INFO][4256] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" host="localhost" Jan 15 05:44:52.909876 containerd[1603]: 2026-01-15 05:44:52.857 [INFO][4256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" host="localhost" Jan 15 05:44:52.909876 containerd[1603]: 2026-01-15 05:44:52.857 [INFO][4256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:52.909876 containerd[1603]: 2026-01-15 05:44:52.857 [INFO][4256] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" HandleID="k8s-pod-network.b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Workload="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.910031 containerd[1603]: 2026-01-15 05:44:52.867 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0", GenerateName:"calico-kube-controllers-c8cc478f5-", Namespace:"calico-system", SelfLink:"", UID:"c9216aea-9a46-4a8f-81a9-8d30cdf7722b", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8cc478f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c8cc478f5-z859x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85324a7581c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:52.910171 containerd[1603]: 2026-01-15 05:44:52.867 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.910171 containerd[1603]: 2026-01-15 05:44:52.867 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85324a7581c ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.910171 containerd[1603]: 2026-01-15 05:44:52.872 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.910256 containerd[1603]: 2026-01-15 05:44:52.873 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0", GenerateName:"calico-kube-controllers-c8cc478f5-", Namespace:"calico-system", SelfLink:"", UID:"c9216aea-9a46-4a8f-81a9-8d30cdf7722b", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c8cc478f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a", Pod:"calico-kube-controllers-c8cc478f5-z859x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85324a7581c", MAC:"a2:8c:00:79:d6:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:52.913196 containerd[1603]: 2026-01-15 05:44:52.895 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" Namespace="calico-system" Pod="calico-kube-controllers-c8cc478f5-z859x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c8cc478f5--z859x-eth0" Jan 15 05:44:52.974614 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 15 05:44:52.974746 kernel: audit: type=1325 audit(1768455892.959:646): table=filter:127 family=2 entries=36 op=nft_register_chain pid=4294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:52.959000 audit[4294]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:52.959000 audit[4294]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffc7d807680 a2=0 a3=7ffc7d80766c items=0 ppid=3946 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:52.994570 kernel: audit: type=1300 audit(1768455892.959:646): arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffc7d807680 a2=0 a3=7ffc7d80766c items=0 ppid=3946 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.010618 kernel: audit: type=1327 audit(1768455892.959:646): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:52.959000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:53.026982 systemd-networkd[1504]: calib3b43148a3f: Link UP Jan 15 05:44:53.028022 containerd[1603]: time="2026-01-15T05:44:53.027847522Z" level=info msg="connecting to shim b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a" address="unix:///run/containerd/s/d13cac712f0e037b4f426b01ebbb3274188a53c8e19a39beb3cbd360c3f34865" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:53.029721 systemd-networkd[1504]: calib3b43148a3f: Gained carrier Jan 15 05:44:53.061651 containerd[1603]: 2026-01-15 05:44:52.665 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0 calico-apiserver-766d8cc98b- calico-apiserver bad082bb-9731-4bff-b64e-9964fb68119a 809 0 2026-01-15 05:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:766d8cc98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-766d8cc98b-vvx8x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib3b43148a3f [] [] }} ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-" Jan 15 05:44:53.061651 containerd[1603]: 2026-01-15 05:44:52.666 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.061651 containerd[1603]: 2026-01-15 05:44:52.778 [INFO][4248] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" HandleID="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Workload="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.780 [INFO][4248] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" HandleID="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Workload="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399db0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-766d8cc98b-vvx8x", "timestamp":"2026-01-15 05:44:52.778890627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.780 [INFO][4248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.857 [INFO][4248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.857 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.881 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" host="localhost" Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.921 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.944 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.953 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.958 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:53.062063 containerd[1603]: 2026-01-15 05:44:52.958 [INFO][4248] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" host="localhost" Jan 15 05:44:53.062642 containerd[1603]: 2026-01-15 05:44:52.966 [INFO][4248] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94 Jan 15 05:44:53.062642 containerd[1603]: 2026-01-15 05:44:52.973 [INFO][4248] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" host="localhost" Jan 15 05:44:53.062642 containerd[1603]: 2026-01-15 05:44:52.988 [INFO][4248] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" host="localhost" Jan 15 05:44:53.062642 containerd[1603]: 2026-01-15 05:44:52.988 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" host="localhost" Jan 15 05:44:53.062642 containerd[1603]: 2026-01-15 05:44:52.988 [INFO][4248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:53.062642 containerd[1603]: 2026-01-15 05:44:52.988 [INFO][4248] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" HandleID="k8s-pod-network.aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Workload="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.062811 containerd[1603]: 2026-01-15 05:44:52.993 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0", GenerateName:"calico-apiserver-766d8cc98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bad082bb-9731-4bff-b64e-9964fb68119a", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766d8cc98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-766d8cc98b-vvx8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3b43148a3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:53.062956 containerd[1603]: 2026-01-15 05:44:52.993 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.062956 containerd[1603]: 2026-01-15 05:44:52.994 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3b43148a3f ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.062956 containerd[1603]: 2026-01-15 05:44:53.029 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.063054 containerd[1603]: 2026-01-15 05:44:53.030 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0", GenerateName:"calico-apiserver-766d8cc98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bad082bb-9731-4bff-b64e-9964fb68119a", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766d8cc98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94", Pod:"calico-apiserver-766d8cc98b-vvx8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3b43148a3f", MAC:"e6:62:bb:37:7d:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:53.063243 containerd[1603]: 2026-01-15 05:44:53.055 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-vvx8x" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--vvx8x-eth0" Jan 15 05:44:53.090000 audit[4335]: NETFILTER_CFG table=filter:128 family=2 entries=54 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:53.102814 kernel: audit: type=1325 audit(1768455893.090:647): table=filter:128 family=2 entries=54 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:53.090000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffc0a80c080 a2=0 a3=7ffc0a80c06c items=0 ppid=3946 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.138544 kernel: audit: type=1300 audit(1768455893.090:647): arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffc0a80c080 a2=0 a3=7ffc0a80c06c items=0 ppid=3946 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.105826 systemd[1]: Started cri-containerd-b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a.scope - libcontainer container b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a. 
Jan 15 05:44:53.125565 systemd-networkd[1504]: calieae99a66fd7: Link UP Jan 15 05:44:53.129863 systemd-networkd[1504]: calieae99a66fd7: Gained carrier Jan 15 05:44:53.090000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:53.160621 kernel: audit: type=1327 audit(1768455893.090:647): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:53.175268 containerd[1603]: 2026-01-15 05:44:52.682 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0 coredns-668d6bf9bc- kube-system 37ca1d96-eda1-47b6-b5e5-48121189a9c0 812 0 2026-01-15 05:44:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wdqdl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieae99a66fd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-" Jan 15 05:44:53.175268 containerd[1603]: 2026-01-15 05:44:52.682 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.175268 containerd[1603]: 2026-01-15 05:44:52.789 [INFO][4250] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" HandleID="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Workload="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:52.789 [INFO][4250] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" HandleID="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Workload="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000127b50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wdqdl", "timestamp":"2026-01-15 05:44:52.789736349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:52.790 [INFO][4250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:52.988 [INFO][4250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:52.988 [INFO][4250] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:53.024 [INFO][4250] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" host="localhost" Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:53.040 [INFO][4250] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:53.051 [INFO][4250] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:53.056 [INFO][4250] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:53.060 [INFO][4250] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:53.175724 containerd[1603]: 2026-01-15 05:44:53.060 [INFO][4250] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" host="localhost" Jan 15 05:44:53.176151 containerd[1603]: 2026-01-15 05:44:53.066 [INFO][4250] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb Jan 15 05:44:53.176151 containerd[1603]: 2026-01-15 05:44:53.073 [INFO][4250] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" host="localhost" Jan 15 05:44:53.176151 containerd[1603]: 2026-01-15 05:44:53.082 [INFO][4250] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" host="localhost" Jan 15 05:44:53.176151 containerd[1603]: 2026-01-15 05:44:53.083 [INFO][4250] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" host="localhost" Jan 15 05:44:53.176151 containerd[1603]: 2026-01-15 05:44:53.083 [INFO][4250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:53.176151 containerd[1603]: 2026-01-15 05:44:53.083 [INFO][4250] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" HandleID="k8s-pod-network.7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Workload="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.176341 containerd[1603]: 2026-01-15 05:44:53.096 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37ca1d96-eda1-47b6-b5e5-48121189a9c0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wdqdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieae99a66fd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:53.176578 containerd[1603]: 2026-01-15 05:44:53.096 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.176578 containerd[1603]: 2026-01-15 05:44:53.096 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieae99a66fd7 ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.176578 containerd[1603]: 2026-01-15 05:44:53.139 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.176674 
containerd[1603]: 2026-01-15 05:44:53.140 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37ca1d96-eda1-47b6-b5e5-48121189a9c0", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb", Pod:"coredns-668d6bf9bc-wdqdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieae99a66fd7", MAC:"16:51:f2:fd:be:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:53.176674 containerd[1603]: 2026-01-15 05:44:53.168 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" Namespace="kube-system" Pod="coredns-668d6bf9bc-wdqdl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wdqdl-eth0" Jan 15 05:44:53.186799 containerd[1603]: time="2026-01-15T05:44:53.186611969Z" level=info msg="connecting to shim aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94" address="unix:///run/containerd/s/dcb92d9da4d8513e4279ab98226d88a686c63d98561b9eda6a0ac80693c0c6b8" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:53.188000 audit: BPF prog-id=211 op=LOAD Jan 15 05:44:53.199978 kernel: audit: type=1334 audit(1768455893.188:648): prog-id=211 op=LOAD Jan 15 05:44:53.200054 kernel: audit: type=1334 audit(1768455893.189:649): prog-id=212 op=LOAD Jan 15 05:44:53.189000 audit: BPF prog-id=212 op=LOAD Jan 15 05:44:53.199375 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:53.189000 audit[4318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.244915 kernel: audit: type=1300 audit(1768455893.189:649): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.245026 kernel: audit: type=1327 audit(1768455893.189:649): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.189000 audit: BPF prog-id=212 op=UNLOAD Jan 15 05:44:53.189000 audit[4318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.189000 audit: BPF prog-id=213 op=LOAD Jan 15 05:44:53.189000 audit[4318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.190000 audit: BPF prog-id=214 op=LOAD Jan 15 05:44:53.190000 audit[4318]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.191000 audit: BPF prog-id=214 op=UNLOAD Jan 15 05:44:53.191000 audit[4318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.191000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.191000 audit: BPF prog-id=213 op=UNLOAD Jan 15 05:44:53.191000 audit[4318]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.191000 audit: BPF prog-id=215 op=LOAD Jan 15 05:44:53.191000 audit[4318]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4305 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.191000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230303930646332373533616630616532366339323233373362313334 Jan 15 05:44:53.268000 audit[4390]: NETFILTER_CFG table=filter:129 family=2 entries=50 op=nft_register_chain pid=4390 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:53.268000 audit[4390]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffcdc67f640 a2=0 a3=7ffcdc67f62c items=0 ppid=3946 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.268000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:53.277888 systemd[1]: Started cri-containerd-aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94.scope - libcontainer container aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94. 
Jan 15 05:44:53.282072 systemd-networkd[1504]: cali94d18b17aca: Link UP Jan 15 05:44:53.283632 systemd-networkd[1504]: cali94d18b17aca: Gained carrier Jan 15 05:44:53.291785 containerd[1603]: time="2026-01-15T05:44:53.291728293Z" level=info msg="connecting to shim 7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb" address="unix:///run/containerd/s/7b9a9d7ab05cb6ced5e660f584e291eaa4be2bd9ed7d18e0e514dbce429612ca" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:52.729 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0 coredns-668d6bf9bc- kube-system 5e1c0e06-5a78-43e6-a8cc-cfe663be0279 819 0 2026-01-15 05:44:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wqfbb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali94d18b17aca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:52.729 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:52.805 [INFO][4271] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" HandleID="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Workload="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:52.806 [INFO][4271] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" HandleID="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Workload="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000286770), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wqfbb", "timestamp":"2026-01-15 05:44:52.805835106 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:52.806 [INFO][4271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.083 [INFO][4271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.083 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.135 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.160 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.179 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.191 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.200 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.200 [INFO][4271] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.229 [INFO][4271] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.238 [INFO][4271] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.259 [INFO][4271] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.259 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" host="localhost" Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.259 [INFO][4271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:53.349562 containerd[1603]: 2026-01-15 05:44:53.259 [INFO][4271] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" HandleID="k8s-pod-network.f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Workload="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.351652 containerd[1603]: 2026-01-15 05:44:53.265 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e1c0e06-5a78-43e6-a8cc-cfe663be0279", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wqfbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94d18b17aca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:53.351652 containerd[1603]: 2026-01-15 05:44:53.265 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.351652 containerd[1603]: 2026-01-15 05:44:53.267 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali94d18b17aca ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.351652 containerd[1603]: 2026-01-15 05:44:53.286 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.351652 
containerd[1603]: 2026-01-15 05:44:53.293 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e1c0e06-5a78-43e6-a8cc-cfe663be0279", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d", Pod:"coredns-668d6bf9bc-wqfbb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali94d18b17aca", MAC:"2a:37:c7:51:74:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:53.351652 containerd[1603]: 2026-01-15 05:44:53.338 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqfbb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqfbb-eth0" Jan 15 05:44:53.378000 audit: BPF prog-id=216 op=LOAD Jan 15 05:44:53.382000 audit: BPF prog-id=217 op=LOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.382000 audit: BPF prog-id=217 op=UNLOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.382000 audit: BPF prog-id=218 op=LOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.382000 audit: BPF prog-id=219 op=LOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.382000 audit: BPF prog-id=219 op=UNLOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.382000 audit: BPF prog-id=218 op=UNLOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.382000 audit: BPF prog-id=220 op=LOAD Jan 15 05:44:53.382000 audit[4377]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4356 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.382000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161643436663063306631653764396432323130653936366535353639 Jan 15 05:44:53.386666 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:53.397000 audit[4434]: NETFILTER_CFG table=filter:130 family=2 entries=44 op=nft_register_chain pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:53.397000 audit[4434]: SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7ffdbf61e700 a2=0 a3=7ffdbf61e6ec items=0 ppid=3946 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.397000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:53.436027 containerd[1603]: time="2026-01-15T05:44:53.435961022Z" level=info msg="connecting to shim f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d" address="unix:///run/containerd/s/1bf567e7c4beda40045864404738fb8da5d117c912b70fde18f885b2964ab793" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:53.439918 systemd[1]: Started cri-containerd-7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb.scope - libcontainer container 7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb. Jan 15 05:44:53.471000 audit: BPF prog-id=221 op=LOAD Jan 15 05:44:53.473000 audit: BPF prog-id=222 op=LOAD Jan 15 05:44:53.473000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.474000 audit: BPF prog-id=222 op=UNLOAD Jan 15 05:44:53.474000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.474000 audit: BPF prog-id=223 op=LOAD Jan 15 05:44:53.474000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.474000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.474000 audit: BPF prog-id=224 op=LOAD Jan 15 05:44:53.474000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.474000 audit: BPF prog-id=224 op=UNLOAD Jan 15 05:44:53.474000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.474000 audit: BPF prog-id=223 op=UNLOAD Jan 15 05:44:53.474000 audit[4421]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.474000 audit: BPF prog-id=225 op=LOAD Jan 15 05:44:53.474000 audit[4421]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4401 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764316361343765396337323731356163336636383561623561363837 Jan 15 05:44:53.480402 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:53.480705 containerd[1603]: time="2026-01-15T05:44:53.480620257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c8cc478f5-z859x,Uid:c9216aea-9a46-4a8f-81a9-8d30cdf7722b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0090dc2753af0ae26c922373b134c5088527b3c324d8ef3bedb90fe1500ef2a\"" Jan 15 05:44:53.519960 systemd[1]: Started 
cri-containerd-f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d.scope - libcontainer container f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d. Jan 15 05:44:53.527109 containerd[1603]: time="2026-01-15T05:44:53.526943015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q55xd,Uid:647d95db-9ea2-4c12-b24e-24a6a7b2ddc1,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:53.527643 containerd[1603]: time="2026-01-15T05:44:53.527217087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzlc6,Uid:e9ba42b2-88e3-4065-bd62-0b6bb90b29e9,Namespace:calico-system,Attempt:0,}" Jan 15 05:44:53.527712 containerd[1603]: time="2026-01-15T05:44:53.527541502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-k6vdp,Uid:1c7a473d-fbbd-41be-961a-cb9f606fd6ff,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:44:53.533239 containerd[1603]: time="2026-01-15T05:44:53.533155987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:44:53.571000 audit: BPF prog-id=226 op=LOAD Jan 15 05:44:53.574000 audit: BPF prog-id=227 op=LOAD Jan 15 05:44:53.574000 audit[4476]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.574000 audit: BPF prog-id=227 op=UNLOAD Jan 15 05:44:53.574000 audit[4476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.574000 audit: BPF prog-id=228 op=LOAD Jan 15 05:44:53.574000 audit[4476]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.574000 audit: BPF prog-id=229 op=LOAD Jan 15 05:44:53.574000 audit[4476]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:44:53.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.575000 audit: BPF prog-id=229 op=UNLOAD Jan 15 05:44:53.575000 audit[4476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.575000 audit: BPF prog-id=228 op=UNLOAD Jan 15 05:44:53.575000 audit[4476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.575000 audit: BPF prog-id=230 op=LOAD Jan 15 05:44:53.575000 audit[4476]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4458 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631316566373634636339663965383138373634333466393933633365 Jan 15 05:44:53.579633 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:53.631118 containerd[1603]: time="2026-01-15T05:44:53.630976715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-vvx8x,Uid:bad082bb-9731-4bff-b64e-9964fb68119a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aad46f0c0f1e7d9d2210e966e5569cc74f8f4d290032fbf7de72a89621481e94\"" Jan 15 05:44:53.641669 containerd[1603]: time="2026-01-15T05:44:53.641539842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:53.663637 containerd[1603]: time="2026-01-15T05:44:53.662792425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:44:53.663637 containerd[1603]: time="2026-01-15T05:44:53.663205515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 
05:44:53.663776 kubelet[2755]: E0115 05:44:53.663556 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:44:53.663776 kubelet[2755]: E0115 05:44:53.663605 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:44:53.664182 kubelet[2755]: E0115 05:44:53.663825 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgxmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c8cc478f5-z859x_calico-system(c9216aea-9a46-4a8f-81a9-8d30cdf7722b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:53.667570 kubelet[2755]: E0115 05:44:53.665335 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:44:53.667717 containerd[1603]: time="2026-01-15T05:44:53.665727877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:44:53.682945 containerd[1603]: time="2026-01-15T05:44:53.682235188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wdqdl,Uid:37ca1d96-eda1-47b6-b5e5-48121189a9c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb\"" Jan 15 05:44:53.687254 kubelet[2755]: E0115 05:44:53.686918 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:53.703159 containerd[1603]: time="2026-01-15T05:44:53.702977254Z" level=info msg="CreateContainer within sandbox \"7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 05:44:53.753565 containerd[1603]: time="2026-01-15T05:44:53.753519043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqfbb,Uid:5e1c0e06-5a78-43e6-a8cc-cfe663be0279,Namespace:kube-system,Attempt:0,} returns sandbox id \"f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d\"" Jan 15 05:44:53.758015 containerd[1603]: time="2026-01-15T05:44:53.756890293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:53.760409 kubelet[2755]: E0115 05:44:53.760178 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:53.762299 containerd[1603]: time="2026-01-15T05:44:53.761958508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:44:53.762299 containerd[1603]: time="2026-01-15T05:44:53.762262857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:53.778607 containerd[1603]: time="2026-01-15T05:44:53.768945985Z" level=info msg="CreateContainer within sandbox \"f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 05:44:53.778724 kubelet[2755]: E0115 05:44:53.762676 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:44:53.778724 
kubelet[2755]: E0115 05:44:53.762708 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:44:53.778724 kubelet[2755]: E0115 05:44:53.762822 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x6cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-vvx8x_calico-apiserver(bad082bb-9731-4bff-b64e-9964fb68119a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:53.778724 kubelet[2755]: E0115 05:44:53.765216 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:44:53.786531 kubelet[2755]: E0115 05:44:53.778951 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:44:53.786753 containerd[1603]: time="2026-01-15T05:44:53.783106492Z" level=info msg="Container 885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:53.799956 containerd[1603]: time="2026-01-15T05:44:53.799034762Z" level=info msg="Container cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:44:53.801635 containerd[1603]: time="2026-01-15T05:44:53.801560129Z" level=info msg="CreateContainer within sandbox \"7d1ca47e9c72715ac3f685ab5a687ecfa728dabbd111d595994d5c7c8eeabbfb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e\"" Jan 15 05:44:53.802255 containerd[1603]: time="2026-01-15T05:44:53.802216264Z" level=info msg="StartContainer for \"885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e\"" Jan 15 05:44:53.804587 containerd[1603]: time="2026-01-15T05:44:53.804287056Z" level=info msg="connecting to shim 885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e" address="unix:///run/containerd/s/7b9a9d7ab05cb6ced5e660f584e291eaa4be2bd9ed7d18e0e514dbce429612ca" protocol=ttrpc version=3 Jan 15 05:44:53.854351 containerd[1603]: time="2026-01-15T05:44:53.853856352Z" level=info msg="CreateContainer within sandbox \"f11ef764cc9f9e81876434f993c3eadc3ec166ac0b8a77eb80499038d4d09e6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137\"" Jan 15 05:44:53.866590 containerd[1603]: time="2026-01-15T05:44:53.864163483Z" level=info msg="StartContainer for \"cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137\"" Jan 15 05:44:53.873555 containerd[1603]: time="2026-01-15T05:44:53.872602477Z" level=info msg="connecting to shim cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137" address="unix:///run/containerd/s/1bf567e7c4beda40045864404738fb8da5d117c912b70fde18f885b2964ab793" protocol=ttrpc version=3 Jan 15 05:44:53.891206 systemd[1]: Started cri-containerd-885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e.scope - libcontainer container 885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e. 
Jan 15 05:44:53.971000 audit: BPF prog-id=231 op=LOAD Jan 15 05:44:53.973000 audit: BPF prog-id=232 op=LOAD Jan 15 05:44:53.973000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.973000 audit: BPF prog-id=232 op=UNLOAD Jan 15 05:44:53.973000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.981000 audit: BPF prog-id=233 op=LOAD Jan 15 05:44:53.981000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.982000 audit: BPF prog-id=234 op=LOAD Jan 15 05:44:53.982000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.982000 audit: BPF prog-id=234 op=UNLOAD Jan 15 05:44:53.982000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.982000 audit: BPF prog-id=233 op=UNLOAD Jan 15 05:44:53.982000 audit[4559]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.982000 audit: BPF prog-id=235 op=LOAD Jan 15 05:44:53.982000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4401 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:53.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838356330336165313866663739333731316431646561386436613932 Jan 15 05:44:53.987369 systemd[1]: Started cri-containerd-cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137.scope - libcontainer container cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137. Jan 15 05:44:54.035000 audit: BPF prog-id=236 op=LOAD Jan 15 05:44:54.037000 audit: BPF prog-id=237 op=LOAD Jan 15 05:44:54.037000 audit[4578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.037000 audit: BPF prog-id=237 op=UNLOAD Jan 15 05:44:54.037000 audit[4578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.038000 audit: BPF prog-id=238 op=LOAD Jan 15 05:44:54.038000 audit[4578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.038000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.038000 audit: BPF prog-id=239 op=LOAD Jan 15 05:44:54.038000 audit[4578]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.038000 audit: BPF prog-id=239 op=UNLOAD Jan 15 05:44:54.038000 audit[4578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.038000 audit: BPF prog-id=238 op=UNLOAD Jan 15 05:44:54.038000 audit[4578]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.039000 audit: BPF prog-id=240 op=LOAD Jan 15 05:44:54.039000 audit[4578]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4458 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364313363653533643761646361343536336262373630616562623562 Jan 15 05:44:54.108229 systemd-networkd[1504]: cali85324a7581c: Gained IPv6LL Jan 15 05:44:54.139358 containerd[1603]: time="2026-01-15T05:44:54.139090375Z" level=info msg="StartContainer for \"cd13ce53d7adca4563bb760aebb5bbffcad71ec7cfbf50397c8be88d52696137\" returns successfully" Jan 15 05:44:54.141699 containerd[1603]: time="2026-01-15T05:44:54.139623924Z" level=info msg="StartContainer for \"885c03ae18ff793711d1dea8d6a92aa53f2934aca79bbcdab937b366c9f0c54e\" returns successfully" Jan 15 05:44:54.190924 systemd-networkd[1504]: caliccf0de4b659: Link UP Jan 15 
05:44:54.193898 systemd-networkd[1504]: caliccf0de4b659: Gained carrier Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.741 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0 calico-apiserver-766d8cc98b- calico-apiserver 1c7a473d-fbbd-41be-961a-cb9f606fd6ff 817 0 2026-01-15 05:44:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:766d8cc98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-766d8cc98b-k6vdp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccf0de4b659 [] [] }} ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.741 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.911 [INFO][4558] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" HandleID="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Workload="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.911 [INFO][4558] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" HandleID="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Workload="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ab980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-766d8cc98b-k6vdp", "timestamp":"2026-01-15 05:44:53.911159838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.911 [INFO][4558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.912 [INFO][4558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.912 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:53.959 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.002 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.034 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.054 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.075 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.075 [INFO][4558] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.087 [INFO][4558] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3 Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.103 [INFO][4558] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.150 [INFO][4558] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.150 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" host="localhost" Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.159 [INFO][4558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:54.258871 containerd[1603]: 2026-01-15 05:44:54.159 [INFO][4558] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" HandleID="k8s-pod-network.7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Workload="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.262916 containerd[1603]: 2026-01-15 05:44:54.180 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0", GenerateName:"calico-apiserver-766d8cc98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c7a473d-fbbd-41be-961a-cb9f606fd6ff", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766d8cc98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-766d8cc98b-k6vdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccf0de4b659", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:54.262916 containerd[1603]: 2026-01-15 05:44:54.180 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.262916 containerd[1603]: 2026-01-15 05:44:54.180 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccf0de4b659 ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.262916 containerd[1603]: 2026-01-15 05:44:54.192 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.262916 containerd[1603]: 2026-01-15 05:44:54.197 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0", GenerateName:"calico-apiserver-766d8cc98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c7a473d-fbbd-41be-961a-cb9f606fd6ff", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"766d8cc98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3", Pod:"calico-apiserver-766d8cc98b-k6vdp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccf0de4b659", MAC:"12:b2:32:05:26:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:54.262916 containerd[1603]: 2026-01-15 05:44:54.240 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" Namespace="calico-apiserver" Pod="calico-apiserver-766d8cc98b-k6vdp" WorkloadEndpoint="localhost-k8s-calico--apiserver--766d8cc98b--k6vdp-eth0" Jan 15 05:44:54.299000 audit[4653]: NETFILTER_CFG table=filter:131 family=2 entries=59 op=nft_register_chain pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:54.299829 systemd-networkd[1504]: calieae99a66fd7: Gained IPv6LL Jan 15 05:44:54.299000 audit[4653]: SYSCALL arch=c000003e syscall=46 success=yes exit=29492 a0=3 a1=7ffe76a1c340 a2=0 a3=7ffe76a1c32c items=0 ppid=3946 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.299000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:54.348802 systemd-networkd[1504]: cali8a3c8fa3503: Link UP Jan 15 05:44:54.357779 systemd-networkd[1504]: cali8a3c8fa3503: Gained carrier Jan 15 05:44:54.408214 containerd[1603]: time="2026-01-15T05:44:54.407675121Z" level=info msg="connecting to shim 7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3" address="unix:///run/containerd/s/c6ca988ff8fd37044978eee6a57b792379085a0bd7b8e9b8d485b6b1a78e7873" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:53.852 [INFO][4500] cni-plugin/plugin.go 340: Calico 
CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--q55xd-eth0 goldmane-666569f655- calico-system 647d95db-9ea2-4c12-b24e-24a6a7b2ddc1 816 0 2026-01-15 05:44:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-q55xd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8a3c8fa3503 [] [] }} ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:53.853 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.100 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" HandleID="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Workload="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.102 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" HandleID="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Workload="localhost-k8s-goldmane--666569f655--q55xd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-q55xd", "timestamp":"2026-01-15 05:44:54.100293163 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.102 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.152 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.152 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.185 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.207 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.242 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.252 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.267 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.268 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.273 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326 Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.286 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.307 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.308 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" host="localhost" Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.309 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:44:54.461094 containerd[1603]: 2026-01-15 05:44:54.309 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" HandleID="k8s-pod-network.b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Workload="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.462012 containerd[1603]: 2026-01-15 05:44:54.324 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--q55xd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"647d95db-9ea2-4c12-b24e-24a6a7b2ddc1", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-q55xd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a3c8fa3503", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:54.462012 containerd[1603]: 2026-01-15 05:44:54.325 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.462012 containerd[1603]: 2026-01-15 05:44:54.325 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a3c8fa3503 ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.462012 containerd[1603]: 2026-01-15 05:44:54.356 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.462012 containerd[1603]: 2026-01-15 05:44:54.359 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--q55xd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"647d95db-9ea2-4c12-b24e-24a6a7b2ddc1", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326", Pod:"goldmane-666569f655-q55xd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a3c8fa3503", MAC:"d2:71:34:0d:a4:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:54.462012 containerd[1603]: 2026-01-15 05:44:54.404 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" Namespace="calico-system" Pod="goldmane-666569f655-q55xd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q55xd-eth0" Jan 15 05:44:54.469120 systemd[1]: Started cri-containerd-7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3.scope - libcontainer container 7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3. 
Jan 15 05:44:54.573639 containerd[1603]: time="2026-01-15T05:44:54.573198058Z" level=info msg="connecting to shim b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326" address="unix:///run/containerd/s/886a545f1be15b9420f2bf3c5a27776c4e0c063e15ffa9826dac62e5241f0f85" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:54.573000 audit[4724]: NETFILTER_CFG table=filter:132 family=2 entries=66 op=nft_register_chain pid=4724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:54.573000 audit[4724]: SYSCALL arch=c000003e syscall=46 success=yes exit=32768 a0=3 a1=7ffd780465d0 a2=0 a3=7ffd780465bc items=0 ppid=3946 pid=4724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.573000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:54.584558 systemd-networkd[1504]: calid67fdcc9e5e: Link UP Jan 15 05:44:54.593707 systemd-networkd[1504]: calid67fdcc9e5e: Gained carrier Jan 15 05:44:54.598000 audit: BPF prog-id=241 op=LOAD Jan 15 05:44:54.604000 audit: BPF prog-id=242 op=LOAD Jan 15 05:44:54.604000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.616000 audit: BPF prog-id=242 op=UNLOAD Jan 15 05:44:54.616000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.616000 audit: BPF prog-id=243 op=LOAD Jan 15 05:44:54.616000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.616000 audit: BPF prog-id=244 op=LOAD Jan 15 05:44:54.616000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.616000 audit: BPF prog-id=244 op=UNLOAD Jan 15 05:44:54.616000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.616000 audit: BPF prog-id=243 op=UNLOAD Jan 15 05:44:54.616000 audit[4681]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.616000 audit: BPF prog-id=245 op=LOAD Jan 15 05:44:54.616000 audit[4681]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4669 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365 Jan 15 05:44:54.634353 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:54.675249 systemd[1]: Started cri-containerd-b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326.scope - libcontainer container b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326. 
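In the audit records above, PROCTITLE carries the audited process's argv hex-encoded, because the arguments are NUL-separated. Decoding one of the logged values recovers the runc command line (the final argument, a container ID path, is visibly cut short in the record itself). A short standard-library sketch, with the hex string copied verbatim from the log:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied from one of the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763373737666537306238323661643563323562626430616361336365"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// Splitting on NUL recovers the individual arguments of the runc invocation.
	for i, arg := range strings.Split(string(raw), "\x00") {
		fmt.Printf("argv[%d] = %s\n", i, arg)
	}
}
```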
Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:53.895 [INFO][4513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--nzlc6-eth0 csi-node-driver- calico-system e9ba42b2-88e3-4065-bd62-0b6bb90b29e9 701 0 2026-01-15 05:44:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-nzlc6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid67fdcc9e5e [] [] }} ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:53.903 [INFO][4513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.134 [INFO][4608] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" HandleID="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Workload="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.136 [INFO][4608] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" HandleID="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Workload="localhost-k8s-csi--node--driver--nzlc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049ec60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-nzlc6", "timestamp":"2026-01-15 05:44:54.134094679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.136 [INFO][4608] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.314 [INFO][4608] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.317 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.350 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.391 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.435 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.451 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.461 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.461 [INFO][4608] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.465 [INFO][4608] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.480 [INFO][4608] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.529 [INFO][4608] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.530 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" host="localhost" Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.533 [INFO][4608] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
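The three Calico IPAM traces above all claim their pod address from the same host affinity block, 192.168.88.128/26, handing out .134, .135 and .136 in turn. A small standard-library Go check that those addresses do fall inside that /26 (which covers 192.168.88.128 through 192.168.88.191); the values are copied from the log, the rest is illustrative:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affinity block claimed for host "localhost" in the IPAM traces above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses assigned to the calico-apiserver, goldmane and csi-node-driver pods.
	for _, s := range []string{"192.168.88.134", "192.168.88.135", "192.168.88.136"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
}
```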
Jan 15 05:44:54.687796 containerd[1603]: 2026-01-15 05:44:54.534 [INFO][4608] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" HandleID="k8s-pod-network.90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Workload="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.688848 containerd[1603]: 2026-01-15 05:44:54.566 [INFO][4513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nzlc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-nzlc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid67fdcc9e5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:54.688848 containerd[1603]: 2026-01-15 05:44:54.567 [INFO][4513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.688848 containerd[1603]: 2026-01-15 05:44:54.568 [INFO][4513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid67fdcc9e5e ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.688848 containerd[1603]: 2026-01-15 05:44:54.598 [INFO][4513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.688848 containerd[1603]: 2026-01-15 05:44:54.599 [INFO][4513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--nzlc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9ba42b2-88e3-4065-bd62-0b6bb90b29e9", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 44, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa", Pod:"csi-node-driver-nzlc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid67fdcc9e5e", MAC:"9e:1d:9f:b6:d5:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:44:54.688848 containerd[1603]: 2026-01-15 05:44:54.672 [INFO][4513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" Namespace="calico-system" Pod="csi-node-driver-nzlc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--nzlc6-eth0" Jan 15 05:44:54.736000 audit: BPF prog-id=246 op=LOAD Jan 15 05:44:54.738000 audit: BPF prog-id=247 op=LOAD Jan 15 05:44:54.738000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.739000 audit: BPF prog-id=247 op=UNLOAD Jan 15 05:44:54.739000 audit[4741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.740000 audit: BPF prog-id=248 op=LOAD Jan 15 05:44:54.740000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 
ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.742000 audit: BPF prog-id=249 op=LOAD Jan 15 05:44:54.742000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.744000 audit: BPF prog-id=249 op=UNLOAD Jan 15 05:44:54.744000 audit[4741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.744000 audit: BPF prog-id=248 op=UNLOAD Jan 15 05:44:54.744000 audit[4741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.744000 audit: BPF prog-id=250 op=LOAD Jan 15 05:44:54.744000 audit[4741]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4727 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232366333643339343337323631643530313964346135623362613832 Jan 15 05:44:54.756632 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:54.792335 kubelet[2755]: E0115 05:44:54.791765 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:54.800257 containerd[1603]: time="2026-01-15T05:44:54.799638322Z" level=info msg="connecting to shim 90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa" address="unix:///run/containerd/s/84fef43dbba0311205046522bbbed7ab6e26a3fa51593de194cdd34d3c0d7601" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:44:54.802805 kubelet[2755]: E0115 05:44:54.802663 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:54.805969 kubelet[2755]: E0115 05:44:54.805929 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:44:54.815915 kubelet[2755]: E0115 05:44:54.815077 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:44:54.859002 containerd[1603]: time="2026-01-15T05:44:54.858803249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-766d8cc98b-k6vdp,Uid:1c7a473d-fbbd-41be-961a-cb9f606fd6ff,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7c777fe70b826ad5c25bbd0aca3ce1e5a0c9f6add2d5a9f6cbb20efa28ae18c3\"" Jan 15 05:44:54.875602 containerd[1603]: time="2026-01-15T05:44:54.875030150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:44:54.881000 audit[4790]: NETFILTER_CFG table=filter:133 family=2 entries=52 op=nft_register_chain pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:44:54.881000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7fff3e838880 a2=0 a3=7fff3e83886c items=0 ppid=3946 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.881000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:44:54.906947 kubelet[2755]: I0115 05:44:54.904911 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wqfbb" podStartSLOduration=36.904887913 podStartE2EDuration="36.904887913s" podCreationTimestamp="2026-01-15 05:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:44:54.853030896 +0000 UTC m=+42.451073426" watchObservedRunningTime="2026-01-15 
05:44:54.904887913 +0000 UTC m=+42.502930433" Jan 15 05:44:54.950000 audit[4808]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:54.950000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffff282ab00 a2=0 a3=7ffff282aaec items=0 ppid=2864 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:54.957598 kubelet[2755]: I0115 05:44:54.956262 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wdqdl" podStartSLOduration=36.956239689 podStartE2EDuration="36.956239689s" podCreationTimestamp="2026-01-15 05:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:44:54.908554684 +0000 UTC m=+42.506597214" watchObservedRunningTime="2026-01-15 05:44:54.956239689 +0000 UTC m=+42.554282229" Jan 15 05:44:54.957000 audit[4808]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:54.957000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffff282ab00 a2=0 a3=0 items=0 ppid=2864 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:54.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:54.960893 containerd[1603]: time="2026-01-15T05:44:54.959638596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:54.966221 systemd[1]: Started cri-containerd-90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa.scope - libcontainer container 90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa. 
Jan 15 05:44:54.967706 containerd[1603]: time="2026-01-15T05:44:54.966387596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:44:54.970064 containerd[1603]: time="2026-01-15T05:44:54.968147140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:54.970838 kubelet[2755]: E0115 05:44:54.970763 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:44:54.970838 kubelet[2755]: E0115 05:44:54.970825 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:44:54.971082 kubelet[2755]: E0115 05:44:54.970937 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr9vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-k6vdp_calico-apiserver(1c7a473d-fbbd-41be-961a-cb9f606fd6ff): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:54.972729 kubelet[2755]: E0115 05:44:54.972666 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:44:55.004629 systemd-networkd[1504]: calib3b43148a3f: Gained IPv6LL Jan 15 05:44:55.082302 containerd[1603]: time="2026-01-15T05:44:55.081915918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q55xd,Uid:647d95db-9ea2-4c12-b24e-24a6a7b2ddc1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b26c3d39437261d5019d4a5b3ba82fb607b3168eb73e8ba3266c933ac209d326\"" Jan 15 05:44:55.083000 audit: BPF prog-id=251 op=LOAD Jan 15 05:44:55.084000 audit: BPF prog-id=252 op=LOAD Jan 15 05:44:55.084000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.084000 audit: BPF prog-id=252 op=UNLOAD Jan 15 05:44:55.084000 audit[4797]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.084000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.085000 audit: BPF prog-id=253 op=LOAD Jan 15 05:44:55.085000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.087000 audit: BPF prog-id=254 op=LOAD Jan 15 05:44:55.087000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 05:44:55.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.091000 audit: BPF prog-id=254 op=UNLOAD Jan 15 05:44:55.091000 audit[4797]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.092000 audit: BPF prog-id=253 op=UNLOAD Jan 15 05:44:55.092000 audit[4797]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.092000 audit[4830]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:55.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.092000 audit[4830]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe949bff90 a2=0 a3=7ffe949bff7c items=0 ppid=2864 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:55.095653 containerd[1603]: time="2026-01-15T05:44:55.095618257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:44:55.093000 audit: BPF prog-id=255 op=LOAD Jan 15 05:44:55.093000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.093000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930656639643538353730363931336663623137613633386336393833 Jan 15 05:44:55.098799 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:44:55.104000 audit[4830]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:55.104000 audit[4830]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe949bff90 a2=0 a3=7ffe949bff7c items=0 ppid=2864 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:55.188405 containerd[1603]: time="2026-01-15T05:44:55.188069129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nzlc6,Uid:e9ba42b2-88e3-4065-bd62-0b6bb90b29e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"90ef9d585706913fcb17a638c6983f328071e76ae6d76e933cb9650b5148b3fa\"" Jan 15 05:44:55.226804 containerd[1603]: time="2026-01-15T05:44:55.226642122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:55.230348 containerd[1603]: time="2026-01-15T05:44:55.230164384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:44:55.230348 containerd[1603]: time="2026-01-15T05:44:55.230273908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:55.230895 kubelet[2755]: E0115 05:44:55.230630 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:44:55.230895 kubelet[2755]: E0115 05:44:55.230687 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:44:55.231246 kubelet[2755]: E0115 05:44:55.230956 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jn8fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q55xd_calico-system(647d95db-9ea2-4c12-b24e-24a6a7b2ddc1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:55.231963 containerd[1603]: time="2026-01-15T05:44:55.231842394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:44:55.232499 kubelet[2755]: E0115 05:44:55.232268 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 
05:44:55.260025 systemd-networkd[1504]: cali94d18b17aca: Gained IPv6LL Jan 15 05:44:55.291063 containerd[1603]: time="2026-01-15T05:44:55.290934518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:55.294755 containerd[1603]: time="2026-01-15T05:44:55.294404262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:44:55.294755 containerd[1603]: time="2026-01-15T05:44:55.294616629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:55.295333 kubelet[2755]: E0115 05:44:55.294917 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:44:55.295333 kubelet[2755]: E0115 05:44:55.294968 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:44:55.295333 kubelet[2755]: E0115 05:44:55.295094 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wcxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:55.301392 containerd[1603]: time="2026-01-15T05:44:55.301202506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:44:55.370630 containerd[1603]: time="2026-01-15T05:44:55.370301228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:44:55.374815 containerd[1603]: time="2026-01-15T05:44:55.374596742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:44:55.374815 containerd[1603]: time="2026-01-15T05:44:55.374690307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:44:55.374978 kubelet[2755]: E0115 05:44:55.374928 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:44:55.375071 kubelet[2755]: E0115 05:44:55.374987 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:44:55.375133 kubelet[2755]: E0115 05:44:55.375098 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wcxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:44:55.376899 kubelet[2755]: E0115 05:44:55.376788 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:55.579718 systemd-networkd[1504]: caliccf0de4b659: Gained IPv6LL Jan 15 05:44:55.825698 kubelet[2755]: E0115 05:44:55.810362 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:44:55.839043 kubelet[2755]: E0115 05:44:55.836853 2755 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:55.839043 kubelet[2755]: E0115 05:44:55.838595 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:55.839043 kubelet[2755]: E0115 05:44:55.838780 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:55.839043 kubelet[2755]: E0115 05:44:55.838892 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:44:55.839043 kubelet[2755]: E0115 05:44:55.838920 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:44:55.915000 audit[4838]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:55.915000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc48d99b60 a2=0 a3=7ffc48d99b4c items=0 ppid=2864 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:55.961000 audit[4838]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:55.961000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc48d99b60 a2=0 a3=7ffc48d99b4c items=0 ppid=2864 pid=4838 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:55.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:56.092031 systemd-networkd[1504]: cali8a3c8fa3503: Gained IPv6LL Jan 15 05:44:56.480601 systemd-networkd[1504]: calid67fdcc9e5e: Gained IPv6LL Jan 15 05:44:56.843849 kubelet[2755]: E0115 05:44:56.843356 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:56.848156 kubelet[2755]: E0115 05:44:56.846365 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:56.848378 kubelet[2755]: E0115 05:44:56.846816 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:44:56.848378 kubelet[2755]: E0115 05:44:56.846866 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:44:56.848378 kubelet[2755]: E0115 05:44:56.847324 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:44:57.024764 kubelet[2755]: I0115 05:44:57.023875 2755 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 05:44:57.026862 kubelet[2755]: E0115 05:44:57.026837 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:44:57.029000 audit[4841]: NETFILTER_CFG table=filter:140 family=2 entries=14 
op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:57.029000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb66f91c0 a2=0 a3=7ffcb66f91ac items=0 ppid=2864 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:57.029000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:57.042000 audit[4841]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:44:57.042000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb66f91c0 a2=0 a3=7ffcb66f91ac items=0 ppid=2864 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:44:57.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:44:57.847360 kubelet[2755]: E0115 05:44:57.847122 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:45:02.538824 containerd[1603]: time="2026-01-15T05:45:02.538738339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:45:02.607114 containerd[1603]: time="2026-01-15T05:45:02.606859466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:02.609049 containerd[1603]: time="2026-01-15T05:45:02.608927816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:45:02.609049 containerd[1603]: time="2026-01-15T05:45:02.608990813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:02.609388 kubelet[2755]: E0115 05:45:02.609253 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:45:02.609388 kubelet[2755]: E0115 05:45:02.609305 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:45:02.612808 kubelet[2755]: E0115 05:45:02.612708 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6de01969a6dc416d90e12cf9d6829f2f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rb2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66f8b465fb-vtc8s_calico-system(99bf8dba-e773-442c-a667-c161cf0a56cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:02.628640 containerd[1603]: time="2026-01-15T05:45:02.624061709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:45:02.688208 containerd[1603]: time="2026-01-15T05:45:02.687894611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:02.690304 containerd[1603]: time="2026-01-15T05:45:02.690114493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:45:02.690304 containerd[1603]: time="2026-01-15T05:45:02.690245918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:02.690814 kubelet[2755]: E0115 05:45:02.690726 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:45:02.690814 kubelet[2755]: E0115 05:45:02.690782 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:45:02.691114 kubelet[2755]: E0115 05:45:02.690913 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66f8b465fb-vtc8s_calico-system(99bf8dba-e773-442c-a667-c161cf0a56cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:02.693099 kubelet[2755]: E0115 05:45:02.692928 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:45:07.534377 containerd[1603]: time="2026-01-15T05:45:07.534244361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:45:07.595180 containerd[1603]: time="2026-01-15T05:45:07.594969183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:07.597310 containerd[1603]: time="2026-01-15T05:45:07.597251224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:45:07.597599 
containerd[1603]: time="2026-01-15T05:45:07.597350169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:07.597716 kubelet[2755]: E0115 05:45:07.597643 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:45:07.597716 kubelet[2755]: E0115 05:45:07.597692 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:45:07.598157 kubelet[2755]: E0115 05:45:07.597792 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wcxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:07.600311 containerd[1603]: time="2026-01-15T05:45:07.600258867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:45:07.665776 containerd[1603]: time="2026-01-15T05:45:07.665660641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:07.667735 containerd[1603]: time="2026-01-15T05:45:07.667577645Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:45:07.667735 containerd[1603]: time="2026-01-15T05:45:07.667649384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:07.668467 kubelet[2755]: E0115 05:45:07.668200 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:45:07.668693 kubelet[2755]: E0115 05:45:07.668624 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:45:07.668933 kubelet[2755]: E0115 05:45:07.668849 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wcxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:07.670139 kubelet[2755]: E0115 05:45:07.670088 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:45:09.524822 containerd[1603]: time="2026-01-15T05:45:09.524772046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:45:09.588863 containerd[1603]: time="2026-01-15T05:45:09.588720014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:09.590921 containerd[1603]: time="2026-01-15T05:45:09.590784260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:45:09.590921 containerd[1603]: time="2026-01-15T05:45:09.590884056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:09.591336 kubelet[2755]: E0115 05:45:09.591292 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:09.592217 kubelet[2755]: E0115 05:45:09.591350 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:09.592217 kubelet[2755]: E0115 05:45:09.591886 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x6cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-vvx8x_calico-apiserver(bad082bb-9731-4bff-b64e-9964fb68119a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:09.592753 containerd[1603]: time="2026-01-15T05:45:09.592157407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:45:09.593678 kubelet[2755]: E0115 05:45:09.593617 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:45:09.660485 containerd[1603]: time="2026-01-15T05:45:09.660266253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:09.662231 containerd[1603]: time="2026-01-15T05:45:09.662147547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:45:09.662231 containerd[1603]: time="2026-01-15T05:45:09.662204937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:09.662797 
kubelet[2755]: E0115 05:45:09.662593 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:09.662797 kubelet[2755]: E0115 05:45:09.662683 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:09.662928 kubelet[2755]: E0115 05:45:09.662828 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr9vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-k6vdp_calico-apiserver(1c7a473d-fbbd-41be-961a-cb9f606fd6ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:09.664752 kubelet[2755]: E0115 05:45:09.664653 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:45:10.526548 containerd[1603]: time="2026-01-15T05:45:10.526246577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:45:10.595652 containerd[1603]: time="2026-01-15T05:45:10.595274172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:10.597575 containerd[1603]: time="2026-01-15T05:45:10.597468838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:45:10.597775 containerd[1603]: time="2026-01-15T05:45:10.597564488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:10.597971 kubelet[2755]: E0115 05:45:10.597898 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:45:10.597971 kubelet[2755]: E0115 05:45:10.597964 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:45:10.598625 kubelet[2755]: E0115 05:45:10.598100 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgxmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c8cc478f5-z859x_calico-system(c9216aea-9a46-4a8f-81a9-8d30cdf7722b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:10.599324 kubelet[2755]: E0115 05:45:10.599267 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:45:11.528154 containerd[1603]: time="2026-01-15T05:45:11.527830934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:45:11.592915 containerd[1603]: time="2026-01-15T05:45:11.592713979Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:11.595290 containerd[1603]: time="2026-01-15T05:45:11.595135904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:45:11.595290 containerd[1603]: time="2026-01-15T05:45:11.595243615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:11.600714 kubelet[2755]: E0115 05:45:11.595649 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:45:11.600714 kubelet[2755]: E0115 05:45:11.595691 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:45:11.600714 kubelet[2755]: E0115 05:45:11.595797 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jn8fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q55xd_calico-system(647d95db-9ea2-4c12-b24e-24a6a7b2ddc1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:11.600714 kubelet[2755]: E0115 05:45:11.597166 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:45:13.526754 kubelet[2755]: E0115 05:45:13.526592 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:45:15.684353 systemd[1]: Started sshd@7-10.0.0.123:22-10.0.0.1:41274.service - OpenSSH per-connection server daemon (10.0.0.1:41274). Jan 15 05:45:15.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.123:22-10.0.0.1:41274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:15.695387 kernel: kauditd_printk_skb: 233 callbacks suppressed Jan 15 05:45:15.695625 kernel: audit: type=1130 audit(1768455915.684:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.123:22-10.0.0.1:41274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:15.835000 audit[4919]: USER_ACCT pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.839182 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:15.849391 sshd[4919]: Accepted publickey for core from 10.0.0.1 port 41274 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:15.851912 kernel: audit: type=1101 audit(1768455915.835:734): pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.852054 kernel: audit: type=1103 audit(1768455915.837:735): pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.837000 audit[4919]: CRED_ACQ pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.858848 systemd-logind[1578]: New session 9 of user core. 
Jan 15 05:45:15.837000 audit[4919]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde50f4e60 a2=3 a3=0 items=0 ppid=1 pid=4919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:15.883678 kernel: audit: type=1006 audit(1768455915.837:736): pid=4919 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 15 05:45:15.883779 kernel: audit: type=1300 audit(1768455915.837:736): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde50f4e60 a2=3 a3=0 items=0 ppid=1 pid=4919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:15.837000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:15.888292 kernel: audit: type=1327 audit(1768455915.837:736): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:15.888822 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 15 05:45:15.894000 audit[4919]: USER_START pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.916511 kernel: audit: type=1105 audit(1768455915.894:737): pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.897000 audit[4923]: CRED_ACQ pid=4923 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:15.934558 kernel: audit: type=1103 audit(1768455915.897:738): pid=4923 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:16.074142 sshd[4923]: Connection closed by 10.0.0.1 port 41274 Jan 15 05:45:16.074890 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:16.078000 audit[4919]: USER_END pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:16.082167 systemd[1]: sshd@7-10.0.0.123:22-10.0.0.1:41274.service: Deactivated successfully. Jan 15 05:45:16.085405 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 05:45:16.088051 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Jan 15 05:45:16.090953 systemd-logind[1578]: Removed session 9. 
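The audit PROCTITLE records above carry the process title as a hex-encoded byte string. Purely as an illustration, a minimal Python decode of the value logged for session 9 recovers the readable title (if the title contained several argv words, audit would separate them with NUL bytes):

    # Hex proctitle value copied from the PROCTITLE audit record above.
    PROCTITLE = "737368642D73657373696F6E3A20636F7265205B707269765D"

    # Audit stores the process title as raw bytes rendered in hex.
    title = bytes.fromhex(PROCTITLE).decode("ascii")
    print(title)  # -> sshd-session: core [priv]
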
Jan 15 05:45:16.078000 audit[4919]: CRED_DISP pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:16.102479 kernel: audit: type=1106 audit(1768455916.078:739): pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:16.102650 kernel: audit: type=1104 audit(1768455916.078:740): pid=4919 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:16.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.123:22-10.0.0.1:41274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:20.527284 kubelet[2755]: E0115 05:45:20.527192 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:45:21.090906 systemd[1]: Started sshd@8-10.0.0.123:22-10.0.0.1:41284.service - OpenSSH per-connection server daemon (10.0.0.1:41284). Jan 15 05:45:21.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.123:22-10.0.0.1:41284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:21.093604 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:45:21.093691 kernel: audit: type=1130 audit(1768455921.090:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.123:22-10.0.0.1:41284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:21.171000 audit[4940]: USER_ACCT pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.172639 sshd[4940]: Accepted publickey for core from 10.0.0.1 port 41284 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:21.174931 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:21.182616 systemd-logind[1578]: New session 10 of user core. 
Jan 15 05:45:21.173000 audit[4940]: CRED_ACQ pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.192596 kernel: audit: type=1101 audit(1768455921.171:743): pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.192703 kernel: audit: type=1103 audit(1768455921.173:744): pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.192742 kernel: audit: type=1006 audit(1768455921.173:745): pid=4940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 15 05:45:21.173000 audit[4940]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc44519270 a2=3 a3=0 items=0 ppid=1 pid=4940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:21.201937 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 05:45:21.211597 kernel: audit: type=1300 audit(1768455921.173:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc44519270 a2=3 a3=0 items=0 ppid=1 pid=4940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:21.231185 kernel: audit: type=1327 audit(1768455921.173:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:21.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:21.231574 kernel: audit: type=1105 audit(1768455921.207:746): pid=4940 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.207000 audit[4940]: USER_START pid=4940 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.245655 kernel: audit: type=1103 audit(1768455921.210:747): pid=4944 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.210000 audit[4944]: CRED_ACQ pid=4944 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.365978 sshd[4944]: Connection closed by 10.0.0.1 port 41284 Jan 15 
05:45:21.366040 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:21.368000 audit[4940]: USER_END pid=4940 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.374079 systemd[1]: sshd@8-10.0.0.123:22-10.0.0.1:41284.service: Deactivated successfully. Jan 15 05:45:21.378255 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 05:45:21.380183 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Jan 15 05:45:21.382488 kernel: audit: type=1106 audit(1768455921.368:748): pid=4940 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.382594 kernel: audit: type=1104 audit(1768455921.368:749): pid=4940 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.368000 audit[4940]: CRED_DISP pid=4940 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:21.383743 systemd-logind[1578]: Removed session 10. Jan 15 05:45:21.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.123:22-10.0.0.1:41284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:45:21.536352 kubelet[2755]: E0115 05:45:21.536083 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:45:22.529409 kubelet[2755]: E0115 05:45:22.529294 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:45:23.525205 kubelet[2755]: E0115 05:45:23.525127 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:45:25.525607 containerd[1603]: time="2026-01-15T05:45:25.524786259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:45:25.602973 containerd[1603]: time="2026-01-15T05:45:25.602896076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:25.604546 containerd[1603]: time="2026-01-15T05:45:25.604383194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:45:25.604647 containerd[1603]: time="2026-01-15T05:45:25.604587884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:25.604984 kubelet[2755]: E0115 05:45:25.604845 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:45:25.604984 kubelet[2755]: E0115 05:45:25.604907 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:45:25.605410 kubelet[2755]: E0115 05:45:25.605057 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6de01969a6dc416d90e12cf9d6829f2f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rb2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66f8b465fb-vtc8s_calico-system(99bf8dba-e773-442c-a667-c161cf0a56cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:25.608044 containerd[1603]: time="2026-01-15T05:45:25.607978603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:45:25.677357 containerd[1603]: time="2026-01-15T05:45:25.677272461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:25.678972 containerd[1603]: time="2026-01-15T05:45:25.678850143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:45:25.678972 containerd[1603]: time="2026-01-15T05:45:25.678954552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:25.679195 kubelet[2755]: E0115 05:45:25.679097 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:45:25.679195 kubelet[2755]: E0115 05:45:25.679169 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:45:25.679459 kubelet[2755]: E0115 05:45:25.679263 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb2v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66f8b465fb-vtc8s_calico-system(99bf8dba-e773-442c-a667-c161cf0a56cd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:25.680875 kubelet[2755]: E0115 05:45:25.680769 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:45:26.394498 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:45:26.394653 kernel: audit: type=1130 audit(1768455926.383:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.123:22-10.0.0.1:60762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 15 05:45:26.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.123:22-10.0.0.1:60762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:26.385079 systemd[1]: Started sshd@9-10.0.0.123:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Jan 15 05:45:26.470000 audit[4962]: USER_ACCT pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.475268 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:26.478072 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:26.482799 systemd-logind[1578]: New session 11 of user core. Jan 15 05:45:26.472000 audit[4962]: CRED_ACQ pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.498986 kernel: audit: type=1101 audit(1768455926.470:752): pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.499094 kernel: audit: type=1103 audit(1768455926.472:753): pid=4962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.506831 kernel: audit: type=1006 audit(1768455926.472:754): pid=4962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 15 05:45:26.506960 kernel: audit: type=1300 audit(1768455926.472:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff576868a0 a2=3 a3=0 items=0 ppid=1 pid=4962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:26.472000 audit[4962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff576868a0 a2=3 a3=0 items=0 ppid=1 pid=4962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:26.523413 kernel: audit: type=1327 audit(1768455926.472:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:26.472000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:26.525763 kubelet[2755]: E0115 05:45:26.525159 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:45:26.530898 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 05:45:26.540000 audit[4962]: USER_START pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.558541 kernel: audit: type=1105 audit(1768455926.540:755): pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.544000 audit[4966]: CRED_ACQ pid=4966 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.570568 kernel: audit: type=1103 audit(1768455926.544:756): pid=4966 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.685488 sshd[4966]: Connection closed by 10.0.0.1 port 60762 Jan 15 05:45:26.686675 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:26.687000 audit[4962]: USER_END pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.693335 systemd[1]: sshd@9-10.0.0.123:22-10.0.0.1:60762.service: Deactivated successfully. Jan 15 05:45:26.698284 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 05:45:26.703025 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Jan 15 05:45:26.705170 systemd-logind[1578]: Removed session 11. 
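The kubelet and containerd entries above repeat one failure shape: containerd receives 404 Not Found from ghcr.io, reports NotFound, and kubelet records ErrImagePull and then ImagePullBackOff against the same image reference and pod. As a minimal sketch for reading a capture like this one (the file name journal.txt is hypothetical, and the regexes target only the image=, pod=, and podUID= fields visible in these records, not the full kubelet log grammar), the following tallies the failing image references and the pods blocked on them:

    import re
    from collections import defaultdict

    JOURNAL_PATH = "journal.txt"  # hypothetical capture of the journal text above

    # kubelet kuberuntime_image.go records: grab the trailing image="..." field.
    IMAGE_RE = re.compile(r'"Failed to pull image".*?image="(?P<image>[^"]+)"')
    # kubelet pod_workers.go records: grab the trailing pod="..." podUID="..." fields.
    POD_RE = re.compile(
        r'"Error syncing pod, skipping".*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
    )

    def summarize(text: str) -> None:
        pulls = defaultdict(int)   # image reference -> failed pull attempts seen
        pods = {}                  # pod -> podUID
        for m in IMAGE_RE.finditer(text):
            pulls[m.group("image")] += 1
        for m in POD_RE.finditer(text):
            pods[m.group("pod")] = m.group("uid")
        for image, count in sorted(pulls.items()):
            print(f"{count:3d} failed pull(s): {image}")
        for pod, uid in sorted(pods.items()):
            print(f"blocked pod: {pod} (uid {uid})")

    if __name__ == "__main__":
        with open(JOURNAL_PATH, encoding="utf-8") as f:
            summarize(f.read())
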
Jan 15 05:45:26.687000 audit[4962]: CRED_DISP pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.732135 kernel: audit: type=1106 audit(1768455926.687:757): pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.732651 kernel: audit: type=1104 audit(1768455926.687:758): pid=4962 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:26.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.123:22-10.0.0.1:60762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:31.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.123:22-10.0.0.1:60778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:31.702133 systemd[1]: Started sshd@10-10.0.0.123:22-10.0.0.1:60778.service - OpenSSH per-connection server daemon (10.0.0.1:60778). Jan 15 05:45:31.705579 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:45:31.705663 kernel: audit: type=1130 audit(1768455931.700:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.123:22-10.0.0.1:60778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:31.891000 audit[5011]: USER_ACCT pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.897217 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:31.901113 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 60778 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:31.894000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.914328 systemd-logind[1578]: New session 12 of user core. 
Jan 15 05:45:31.923805 kernel: audit: type=1101 audit(1768455931.891:761): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.923908 kernel: audit: type=1103 audit(1768455931.894:762): pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.926814 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 05:45:31.932863 kernel: audit: type=1006 audit(1768455931.894:763): pid=5011 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 15 05:45:31.933060 kernel: audit: type=1300 audit(1768455931.894:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecc10d3e0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:31.894000 audit[5011]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecc10d3e0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:31.945765 kernel: audit: type=1327 audit(1768455931.894:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:31.894000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:31.932000 audit[5011]: USER_START pid=5011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.967594 kernel: audit: type=1105 audit(1768455931.932:764): pid=5011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.936000 audit[5015]: CRED_ACQ pid=5015 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:31.987572 kernel: audit: type=1103 audit(1768455931.936:765): pid=5015 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.100647 sshd[5015]: Connection closed by 10.0.0.1 port 60778 Jan 15 05:45:32.101858 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:32.104000 audit[5011]: USER_END pid=5011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.125768 kernel: audit: type=1106 audit(1768455932.104:766): pid=5011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.104000 audit[5011]: CRED_DISP pid=5011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.128378 systemd[1]: sshd@10-10.0.0.123:22-10.0.0.1:60778.service: Deactivated successfully. Jan 15 05:45:32.132810 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 05:45:32.135904 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Jan 15 05:45:32.138547 kernel: audit: type=1104 audit(1768455932.104:767): pid=5011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.123:22-10.0.0.1:60778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:32.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.123:22-10.0.0.1:60780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:32.142761 systemd[1]: Started sshd@11-10.0.0.123:22-10.0.0.1:60780.service - OpenSSH per-connection server daemon (10.0.0.1:60780). Jan 15 05:45:32.145289 systemd-logind[1578]: Removed session 12. 
Jan 15 05:45:32.224000 audit[5030]: USER_ACCT pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.226717 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 60780 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:32.230253 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:32.226000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.226000 audit[5030]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3656b40 a2=3 a3=0 items=0 ppid=1 pid=5030 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:32.226000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:32.238388 systemd-logind[1578]: New session 13 of user core. Jan 15 05:45:32.247876 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 05:45:32.251000 audit[5030]: USER_START pid=5030 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.254000 audit[5034]: CRED_ACQ pid=5034 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.466704 sshd[5034]: Connection closed by 10.0.0.1 port 60780 Jan 15 05:45:32.467789 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:32.469000 audit[5030]: USER_END pid=5030 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.470000 audit[5030]: CRED_DISP pid=5030 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.482082 systemd[1]: sshd@11-10.0.0.123:22-10.0.0.1:60780.service: Deactivated successfully. Jan 15 05:45:32.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.123:22-10.0.0.1:60780 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:32.490667 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 05:45:32.499946 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. 
Jan 15 05:45:32.504748 systemd[1]: Started sshd@12-10.0.0.123:22-10.0.0.1:59924.service - OpenSSH per-connection server daemon (10.0.0.1:59924). Jan 15 05:45:32.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.123:22-10.0.0.1:59924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:32.507932 systemd-logind[1578]: Removed session 13. Jan 15 05:45:32.593000 audit[5046]: USER_ACCT pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.596129 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 59924 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:32.596000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.597000 audit[5046]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddeec3530 a2=3 a3=0 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:32.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:32.600014 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:32.611518 systemd-logind[1578]: New session 14 of user core. Jan 15 05:45:32.618056 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 05:45:32.628000 audit[5046]: USER_START pid=5046 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.630000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.745592 sshd[5050]: Connection closed by 10.0.0.1 port 59924 Jan 15 05:45:32.746748 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:32.747000 audit[5046]: USER_END pid=5046 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.747000 audit[5046]: CRED_DISP pid=5046 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:32.755039 systemd[1]: sshd@12-10.0.0.123:22-10.0.0.1:59924.service: Deactivated successfully. 
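The SYSCALL records above can be read directly: arch=c000003e is the x86_64 ABI, and on x86_64 syscall number 1 is write(2), so each record describes sshd-session writing 3 bytes (a2=3, exit=3) to file descriptor 8 (a0=8). A tiny sketch of that mapping (the dictionary is a deliberately small excerpt, not a full syscall table):

    # Interpret the numeric fields of an audit SYSCALL record on x86_64.
    X86_64_SYSCALLS = {0: "read", 1: "write", 2: "open", 3: "close"}  # excerpt only

    def describe_syscall(fields: dict[str, str]) -> str:
        """fields: key=value pairs parsed from one SYSCALL record."""
        name = X86_64_SYSCALLS.get(int(fields["syscall"]), fields["syscall"])
        return f"{name}(fd={fields['a0']}, count={fields['a2']}) -> {fields['exit']}"

    print(describe_syscall({"syscall": "1", "a0": "8", "a2": "3", "exit": "3"}))
    # -> write(fd=8, count=3) -> 3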
Jan 15 05:45:32.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.123:22-10.0.0.1:59924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:32.759175 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 05:45:32.762785 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Jan 15 05:45:32.766397 systemd-logind[1578]: Removed session 14. Jan 15 05:45:33.526999 containerd[1603]: time="2026-01-15T05:45:33.526872322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:45:33.584843 containerd[1603]: time="2026-01-15T05:45:33.584689401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:33.587715 containerd[1603]: time="2026-01-15T05:45:33.587653067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:45:33.587715 containerd[1603]: time="2026-01-15T05:45:33.587742393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:33.588061 kubelet[2755]: E0115 05:45:33.587855 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:33.588061 kubelet[2755]: E0115 05:45:33.587895 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:33.588061 kubelet[2755]: E0115 05:45:33.587986 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x6cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-vvx8x_calico-apiserver(bad082bb-9731-4bff-b64e-9964fb68119a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:33.591075 kubelet[2755]: E0115 05:45:33.590906 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:45:34.531546 containerd[1603]: time="2026-01-15T05:45:34.531393696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:45:34.611534 containerd[1603]: time="2026-01-15T05:45:34.611288012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:34.614145 containerd[1603]: time="2026-01-15T05:45:34.613935566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:45:34.614145 containerd[1603]: time="2026-01-15T05:45:34.614090463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:34.616733 kubelet[2755]: E0115 05:45:34.615042 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:45:34.616733 kubelet[2755]: E0115 05:45:34.615814 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:45:34.616733 kubelet[2755]: E0115 05:45:34.616139 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgxmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c8cc478f5-z859x_calico-system(c9216aea-9a46-4a8f-81a9-8d30cdf7722b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:34.617733 kubelet[2755]: E0115 05:45:34.617370 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:45:34.618854 containerd[1603]: time="2026-01-15T05:45:34.618406740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:45:34.687158 containerd[1603]: time="2026-01-15T05:45:34.687020982Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:34.690094 containerd[1603]: time="2026-01-15T05:45:34.689894015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:45:34.690537 containerd[1603]: time="2026-01-15T05:45:34.690006863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:34.691761 kubelet[2755]: E0115 05:45:34.691662 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:45:34.691828 kubelet[2755]: E0115 05:45:34.691765 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:45:34.693107 kubelet[2755]: E0115 05:45:34.692590 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wcxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:34.697737 
containerd[1603]: time="2026-01-15T05:45:34.697688278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:45:34.763146 containerd[1603]: time="2026-01-15T05:45:34.762932443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:34.765273 containerd[1603]: time="2026-01-15T05:45:34.765102142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:45:34.765273 containerd[1603]: time="2026-01-15T05:45:34.765164297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:34.765932 kubelet[2755]: E0115 05:45:34.765776 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:45:34.765932 kubelet[2755]: E0115 05:45:34.765868 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:45:34.766141 kubelet[2755]: E0115 05:45:34.766064 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wcxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-nzlc6_calico-system(e9ba42b2-88e3-4065-bd62-0b6bb90b29e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:34.767555 kubelet[2755]: E0115 05:45:34.767340 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:45:35.524322 containerd[1603]: time="2026-01-15T05:45:35.523794999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:45:35.606243 containerd[1603]: time="2026-01-15T05:45:35.606131463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:35.608083 containerd[1603]: time="2026-01-15T05:45:35.607955234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:45:35.608083 containerd[1603]: time="2026-01-15T05:45:35.608079375Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:35.608348 kubelet[2755]: E0115 05:45:35.608283 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:35.608555 kubelet[2755]: E0115 05:45:35.608359 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:45:35.608721 kubelet[2755]: E0115 05:45:35.608610 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr9vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-k6vdp_calico-apiserver(1c7a473d-fbbd-41be-961a-cb9f606fd6ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:35.609858 kubelet[2755]: E0115 05:45:35.609770 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:45:37.523047 kubelet[2755]: E0115 05:45:37.522777 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:45:37.523047 kubelet[2755]: E0115 05:45:37.523052 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:45:37.525103 kubelet[2755]: E0115 05:45:37.525018 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:45:37.764645 systemd[1]: Started sshd@13-10.0.0.123:22-10.0.0.1:59932.service - OpenSSH per-connection server daemon (10.0.0.1:59932). Jan 15 05:45:37.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.123:22-10.0.0.1:59932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:37.780916 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 05:45:37.781021 kernel: audit: type=1130 audit(1768455937.763:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.123:22-10.0.0.1:59932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:45:37.849000 audit[5065]: USER_ACCT pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.853675 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:37.864343 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 59932 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:37.864668 kernel: audit: type=1101 audit(1768455937.849:788): pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.850000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.867525 systemd-logind[1578]: New session 15 of user core. Jan 15 05:45:37.883685 kernel: audit: type=1103 audit(1768455937.850:789): pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.883780 kernel: audit: type=1006 audit(1768455937.851:790): pid=5065 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 15 05:45:37.883821 kernel: audit: type=1300 audit(1768455937.851:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc291fb8c0 a2=3 a3=0 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:37.851000 audit[5065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc291fb8c0 a2=3 a3=0 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:37.851000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:37.902832 systemd[1]: Started session-15.scope - Session 15 of User core. 
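Every calico image pull above fails the same way: containerd gets 404 Not Found from ghcr.io and kubelet surfaces it as ErrImagePull, which later decays into ImagePullBackOff. A rough way to check whether a given tag resolves at all, assuming the standard OCI distribution endpoints and ghcr.io's anonymous token flow for public repositories (the repository name and tag below are taken from the failing pulls; the helper itself is hypothetical, not part of any tool in this log):

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        # Anonymous pull token for a public ghcr.io repository (assumed flow).
        token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json, "
                          "application/vnd.docker.distribution.manifest.v2+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # matches the "not found" pulls in the log
                return False
            raise

    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))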
Jan 15 05:45:37.907769 kernel: audit: type=1327 audit(1768455937.851:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:37.909000 audit[5065]: USER_START pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.933829 kernel: audit: type=1105 audit(1768455937.909:791): pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.933000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:37.950147 kernel: audit: type=1103 audit(1768455937.933:792): pid=5069 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:38.042863 sshd[5069]: Connection closed by 10.0.0.1 port 59932 Jan 15 05:45:38.043131 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:38.045000 audit[5065]: USER_END pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:38.051062 systemd[1]: sshd@13-10.0.0.123:22-10.0.0.1:59932.service: Deactivated successfully. Jan 15 05:45:38.056039 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 05:45:38.058784 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. Jan 15 05:45:38.060532 systemd-logind[1578]: Removed session 15. Jan 15 05:45:38.045000 audit[5065]: CRED_DISP pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:38.078329 kernel: audit: type=1106 audit(1768455938.045:793): pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:38.078413 kernel: audit: type=1104 audit(1768455938.045:794): pid=5065 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:38.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.123:22-10.0.0.1:59932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 15 05:45:38.528413 kubelet[2755]: E0115 05:45:38.528089 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:45:40.522381 kubelet[2755]: E0115 05:45:40.522287 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:45:41.525058 containerd[1603]: time="2026-01-15T05:45:41.524936921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:45:41.602830 containerd[1603]: time="2026-01-15T05:45:41.602750325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:45:41.605387 containerd[1603]: time="2026-01-15T05:45:41.605314312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:45:41.605387 containerd[1603]: time="2026-01-15T05:45:41.605377079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:45:41.606051 kubelet[2755]: E0115 05:45:41.605941 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:45:41.606051 kubelet[2755]: E0115 05:45:41.605993 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:45:41.606762 kubelet[2755]: E0115 05:45:41.606103 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jn8fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q55xd_calico-system(647d95db-9ea2-4c12-b24e-24a6a7b2ddc1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:45:41.607337 kubelet[2755]: E0115 05:45:41.607182 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:45:43.081612 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:45:43.081702 kernel: audit: type=1130 audit(1768455943.062:796): pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.123:22-10.0.0.1:42568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:43.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.123:22-10.0.0.1:42568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:43.063956 systemd[1]: Started sshd@14-10.0.0.123:22-10.0.0.1:42568.service - OpenSSH per-connection server daemon (10.0.0.1:42568). Jan 15 05:45:43.173000 audit[5086]: USER_ACCT pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.177734 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 42568 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:43.180725 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:43.176000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.190650 systemd-logind[1578]: New session 16 of user core. Jan 15 05:45:43.196821 kernel: audit: type=1101 audit(1768455943.173:797): pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.197114 kernel: audit: type=1103 audit(1768455943.176:798): pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.203703 kernel: audit: type=1006 audit(1768455943.176:799): pid=5086 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 15 05:45:43.203816 kernel: audit: type=1300 audit(1768455943.176:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffd2ab740 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:43.176000 audit[5086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffd2ab740 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:43.217590 kernel: audit: type=1327 audit(1768455943.176:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:43.176000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:43.217834 systemd[1]: Started session-16.scope - Session 16 of User core. 
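By this point the same SSH lifecycle has repeated for sessions 12 through 15, and session 16 has just opened: USER_ACCT and CRED_ACQ on accept, USER_START when the PAM session opens, then USER_END and CRED_DISP well under a second later when the client disconnects. A minimal sketch for pairing those records by audit session id and measuring how short the sessions are (the parsing assumes the layout of the kernel-echoed audit lines above and is illustrative only):

    import re

    # Pair PAM session_open / session_close audit records by ses= id and
    # compute how long each session lasted, using the audit(EPOCH:SERIAL) stamp.
    STAMP = re.compile(r"audit\((?P<ts>\d+\.\d+):\d+\):.*?\bses=(?P<ses>\d+)")

    def session_durations(lines):
        opened, durations = {}, {}
        for line in lines:
            m = STAMP.search(line)
            if not m:
                continue
            ts, ses = float(m.group("ts")), m.group("ses")
            if "op=PAM:session_open" in line:
                opened[ses] = ts
            elif "op=PAM:session_close" in line and ses in opened:
                durations[ses] = ts - opened.pop(ses)
        return durations

    # Session 12 above: 1768455932.104 - 1768455931.932 ≈ 0.17 s.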
Jan 15 05:45:43.222000 audit[5086]: USER_START pid=5086 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.237575 kernel: audit: type=1105 audit(1768455943.222:800): pid=5086 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.224000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.249566 kernel: audit: type=1103 audit(1768455943.224:801): pid=5090 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.337043 sshd[5090]: Connection closed by 10.0.0.1 port 42568 Jan 15 05:45:43.337397 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:43.337000 audit[5086]: USER_END pid=5086 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.343087 systemd[1]: sshd@14-10.0.0.123:22-10.0.0.1:42568.service: Deactivated successfully. Jan 15 05:45:43.346403 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 05:45:43.353578 kernel: audit: type=1106 audit(1768455943.337:802): pid=5086 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.338000 audit[5086]: CRED_DISP pid=5086 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.354549 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit. Jan 15 05:45:43.358247 systemd-logind[1578]: Removed session 16. Jan 15 05:45:43.364545 kernel: audit: type=1104 audit(1768455943.338:803): pid=5086 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:43.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.123:22-10.0.0.1:42568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:45:46.542026 kubelet[2755]: E0115 05:45:46.541725 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:45:46.542026 kubelet[2755]: E0115 05:45:46.541885 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:45:48.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.123:22-10.0.0.1:42578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:48.358756 systemd[1]: Started sshd@15-10.0.0.123:22-10.0.0.1:42578.service - OpenSSH per-connection server daemon (10.0.0.1:42578). Jan 15 05:45:48.361162 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:45:48.361249 kernel: audit: type=1130 audit(1768455948.357:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.123:22-10.0.0.1:42578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:48.442000 audit[5103]: USER_ACCT pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.443974 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 42578 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:48.446949 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:48.443000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.464307 systemd-logind[1578]: New session 17 of user core. 
Jan 15 05:45:48.467616 kernel: audit: type=1101 audit(1768455948.442:806): pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.467703 kernel: audit: type=1103 audit(1768455948.443:807): pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.467740 kernel: audit: type=1006 audit(1768455948.443:808): pid=5103 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 15 05:45:48.443000 audit[5103]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff77e3ac70 a2=3 a3=0 items=0 ppid=1 pid=5103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:48.493694 kernel: audit: type=1300 audit(1768455948.443:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff77e3ac70 a2=3 a3=0 items=0 ppid=1 pid=5103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:48.493840 kernel: audit: type=1327 audit(1768455948.443:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:48.443000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:48.505134 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 05:45:48.509000 audit[5103]: USER_START pid=5103 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.526248 kubelet[2755]: E0115 05:45:48.526082 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:45:48.528695 kubelet[2755]: E0115 05:45:48.527962 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:45:48.538618 kernel: audit: type=1105 audit(1768455948.509:809): pid=5103 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.512000 audit[5108]: CRED_ACQ pid=5108 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.553550 kernel: audit: type=1103 audit(1768455948.512:810): pid=5108 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.779297 sshd[5108]: Connection closed by 10.0.0.1 port 42578 Jan 15 05:45:48.780738 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:48.781000 audit[5103]: USER_END pid=5103 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.794319 systemd[1]: sshd@15-10.0.0.123:22-10.0.0.1:42578.service: Deactivated successfully. Jan 15 05:45:48.799182 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 05:45:48.782000 audit[5103]: CRED_DISP pid=5103 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.802556 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit. Jan 15 05:45:48.804297 systemd-logind[1578]: Removed session 17. 
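The kubelet "Nameserver limits exceeded" warnings above show the node's resolv.conf carrying more nameservers than kubelet will pass through; only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive. A minimal sketch of that trimming, assuming the usual three-nameserver cap (the fourth server in the sample is made up for illustration; the log does not say which entry was dropped):

    # Keep only the first MAX_NAMESERVERS entries from a resolv.conf, the way
    # the kubelet warning above describes (assumed cap of three).
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [
            parts[1]
            for line in resolv_conf_text.splitlines()
            if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
        ]
        if len(servers) > MAX_NAMESERVERS:
            print("Nameserver limits exceeded, omitting:", servers[MAX_NAMESERVERS:])
        return servers[:MAX_NAMESERVERS]

    sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 192.0.2.53\n"
    print(applied_nameservers(sample))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']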
Jan 15 05:45:48.815976 kernel: audit: type=1106 audit(1768455948.781:811): pid=5103 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.816092 kernel: audit: type=1104 audit(1768455948.782:812): pid=5103 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:48.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.123:22-10.0.0.1:42578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:49.525261 kubelet[2755]: E0115 05:45:49.525080 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:45:49.526986 kubelet[2755]: E0115 05:45:49.526790 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:45:53.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.123:22-10.0.0.1:55196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:53.800812 systemd[1]: Started sshd@16-10.0.0.123:22-10.0.0.1:55196.service - OpenSSH per-connection server daemon (10.0.0.1:55196). Jan 15 05:45:53.804530 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:45:53.804609 kernel: audit: type=1130 audit(1768455953.799:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.123:22-10.0.0.1:55196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:45:53.926592 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 55196 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:53.924000 audit[5127]: USER_ACCT pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:53.930840 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:53.927000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:53.941228 systemd-logind[1578]: New session 18 of user core. Jan 15 05:45:53.952965 kernel: audit: type=1101 audit(1768455953.924:815): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:53.953039 kernel: audit: type=1103 audit(1768455953.927:816): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:53.953067 kernel: audit: type=1006 audit(1768455953.928:817): pid=5127 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 15 05:45:53.928000 audit[5127]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef7055a60 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:53.978584 kernel: audit: type=1300 audit(1768455953.928:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef7055a60 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:53.978723 kernel: audit: type=1327 audit(1768455953.928:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:53.928000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:53.979781 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 15 05:45:53.984000 audit[5127]: USER_START pid=5127 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:53.985000 audit[5131]: CRED_ACQ pid=5131 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.016785 kernel: audit: type=1105 audit(1768455953.984:818): pid=5127 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.016910 kernel: audit: type=1103 audit(1768455953.985:819): pid=5131 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.167909 sshd[5131]: Connection closed by 10.0.0.1 port 55196 Jan 15 05:45:54.170357 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:54.171000 audit[5127]: USER_END pid=5127 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.172000 audit[5127]: CRED_DISP pid=5127 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.193777 kernel: audit: type=1106 audit(1768455954.171:820): pid=5127 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.193846 kernel: audit: type=1104 audit(1768455954.172:821): pid=5127 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.200636 systemd[1]: sshd@16-10.0.0.123:22-10.0.0.1:55196.service: Deactivated successfully. Jan 15 05:45:54.205936 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 05:45:54.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.123:22-10.0.0.1:55196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:54.210394 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit. Jan 15 05:45:54.223820 systemd[1]: Started sshd@17-10.0.0.123:22-10.0.0.1:55206.service - OpenSSH per-connection server daemon (10.0.0.1:55206). 
Jan 15 05:45:54.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.123:22-10.0.0.1:55206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:54.227110 systemd-logind[1578]: Removed session 18. Jan 15 05:45:54.333040 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 55206 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:54.331000 audit[5145]: USER_ACCT pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.333000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.334000 audit[5145]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee1a66a60 a2=3 a3=0 items=0 ppid=1 pid=5145 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:54.334000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:54.337248 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:54.346623 systemd-logind[1578]: New session 19 of user core. Jan 15 05:45:54.352950 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 05:45:54.356000 audit[5145]: USER_START pid=5145 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.360000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.530189 kubelet[2755]: E0115 05:45:54.530135 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:45:54.801171 sshd[5149]: Connection closed by 10.0.0.1 port 55206 Jan 15 05:45:54.800134 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:54.806000 audit[5145]: USER_END pid=5145 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.806000 audit[5145]: 
CRED_DISP pid=5145 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.811954 systemd[1]: sshd@17-10.0.0.123:22-10.0.0.1:55206.service: Deactivated successfully. Jan 15 05:45:54.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.123:22-10.0.0.1:55206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:54.819185 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 05:45:54.822840 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit. Jan 15 05:45:54.825193 systemd[1]: Started sshd@18-10.0.0.123:22-10.0.0.1:55214.service - OpenSSH per-connection server daemon (10.0.0.1:55214). Jan 15 05:45:54.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.123:22-10.0.0.1:55214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:54.827936 systemd-logind[1578]: Removed session 19. Jan 15 05:45:54.921000 audit[5162]: USER_ACCT pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.923694 sshd[5162]: Accepted publickey for core from 10.0.0.1 port 55214 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:54.924000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.924000 audit[5162]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc1b32ac0 a2=3 a3=0 items=0 ppid=1 pid=5162 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:54.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:54.926884 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:54.934633 systemd-logind[1578]: New session 20 of user core. Jan 15 05:45:54.944688 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 15 05:45:54.946000 audit[5162]: USER_START pid=5162 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:54.949000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:55.623000 audit[5181]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:45:55.623000 audit[5181]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdb8dcec00 a2=0 a3=7ffdb8dcebec items=0 ppid=2864 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:55.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:45:55.626384 sshd[5166]: Connection closed by 10.0.0.1 port 55214 Jan 15 05:45:55.627782 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:55.628000 audit[5181]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:45:55.628000 audit[5181]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb8dcec00 a2=0 a3=0 items=0 ppid=2864 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:55.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:45:55.630000 audit[5162]: USER_END pid=5162 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:55.630000 audit[5162]: CRED_DISP pid=5162 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:55.641994 systemd[1]: sshd@18-10.0.0.123:22-10.0.0.1:55214.service: Deactivated successfully. Jan 15 05:45:55.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.123:22-10.0.0.1:55214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:55.648845 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 05:45:55.651858 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit. 
Jan 15 05:45:55.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.123:22-10.0.0.1:55226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:55.659117 systemd[1]: Started sshd@19-10.0.0.123:22-10.0.0.1:55226.service - OpenSSH per-connection server daemon (10.0.0.1:55226). Jan 15 05:45:55.663191 systemd-logind[1578]: Removed session 20. Jan 15 05:45:55.660000 audit[5187]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:45:55.660000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe1a97d7e0 a2=0 a3=7ffe1a97d7cc items=0 ppid=2864 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:55.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:45:55.682000 audit[5187]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:45:55.682000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe1a97d7e0 a2=0 a3=0 items=0 ppid=2864 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:55.682000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:45:55.760000 audit[5188]: USER_ACCT pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:55.762546 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 55226 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:55.765000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:55.765000 audit[5188]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbbc28820 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:55.765000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:55.768184 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:55.779983 systemd-logind[1578]: New session 21 of user core. Jan 15 05:45:55.788757 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 15 05:45:55.793000 audit[5188]: USER_START pid=5188 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:55.796000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.062595 sshd[5192]: Connection closed by 10.0.0.1 port 55226 Jan 15 05:45:56.063773 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:56.064000 audit[5188]: USER_END pid=5188 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.064000 audit[5188]: CRED_DISP pid=5188 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.080213 systemd[1]: sshd@19-10.0.0.123:22-10.0.0.1:55226.service: Deactivated successfully. Jan 15 05:45:56.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.123:22-10.0.0.1:55226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:56.083078 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 05:45:56.087767 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit. Jan 15 05:45:56.094651 systemd[1]: Started sshd@20-10.0.0.123:22-10.0.0.1:55238.service - OpenSSH per-connection server daemon (10.0.0.1:55238). Jan 15 05:45:56.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.123:22-10.0.0.1:55238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:56.096322 systemd-logind[1578]: Removed session 21. 
Jan 15 05:45:56.185000 audit[5203]: USER_ACCT pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.188164 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 55238 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:45:56.187000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.188000 audit[5203]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc06b14cb0 a2=3 a3=0 items=0 ppid=1 pid=5203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:45:56.188000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:45:56.191979 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:45:56.202403 systemd-logind[1578]: New session 22 of user core. Jan 15 05:45:56.207944 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 05:45:56.211000 audit[5203]: USER_START pid=5203 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.216000 audit[5207]: CRED_ACQ pid=5207 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.341752 sshd[5207]: Connection closed by 10.0.0.1 port 55238 Jan 15 05:45:56.342139 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Jan 15 05:45:56.343000 audit[5203]: USER_END pid=5203 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.343000 audit[5203]: CRED_DISP pid=5203 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:45:56.349381 systemd[1]: sshd@20-10.0.0.123:22-10.0.0.1:55238.service: Deactivated successfully. Jan 15 05:45:56.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.123:22-10.0.0.1:55238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:45:56.352264 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 05:45:56.354329 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit. Jan 15 05:45:56.357152 systemd-logind[1578]: Removed session 22. 
Jan 15 05:45:57.523679 kubelet[2755]: E0115 05:45:57.523566 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:46:00.522928 kubelet[2755]: E0115 05:46:00.522816 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:46:01.361561 systemd[1]: Started sshd@21-10.0.0.123:22-10.0.0.1:55244.service - OpenSSH per-connection server daemon (10.0.0.1:55244). Jan 15 05:46:01.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.123:22-10.0.0.1:55244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:01.365170 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 15 05:46:01.365717 kernel: audit: type=1130 audit(1768455961.360:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.123:22-10.0.0.1:55244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:01.437000 audit[5245]: USER_ACCT pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.442813 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:46:01.444176 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 55244 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:46:01.449919 systemd-logind[1578]: New session 23 of user core. 
Jan 15 05:46:01.440000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.467258 kernel: audit: type=1101 audit(1768455961.437:864): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.467701 kernel: audit: type=1103 audit(1768455961.440:865): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.467821 kernel: audit: type=1006 audit(1768455961.440:866): pid=5245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 15 05:46:01.440000 audit[5245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1dc61da0 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:01.493720 kernel: audit: type=1300 audit(1768455961.440:866): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1dc61da0 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:01.493867 kernel: audit: type=1327 audit(1768455961.440:866): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:01.440000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:01.504188 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 15 05:46:01.507000 audit[5245]: USER_START pid=5245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.524289 kubelet[2755]: E0115 05:46:01.524164 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:46:01.529660 kubelet[2755]: E0115 05:46:01.529552 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:46:01.530000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.543718 kernel: audit: type=1105 audit(1768455961.507:867): pid=5245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.543859 kernel: audit: type=1103 audit(1768455961.530:868): pid=5249 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.642657 sshd[5249]: Connection closed by 10.0.0.1 port 55244 Jan 15 05:46:01.644785 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Jan 15 05:46:01.644000 audit[5245]: USER_END pid=5245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.650045 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit. 
Jan 15 05:46:01.651137 systemd[1]: sshd@21-10.0.0.123:22-10.0.0.1:55244.service: Deactivated successfully. Jan 15 05:46:01.656296 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 05:46:01.668182 kernel: audit: type=1106 audit(1768455961.644:869): pid=5245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.668298 kernel: audit: type=1104 audit(1768455961.645:870): pid=5245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.645000 audit[5245]: CRED_DISP pid=5245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:01.659121 systemd-logind[1578]: Removed session 23. Jan 15 05:46:01.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.123:22-10.0.0.1:55244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:02.523120 kubelet[2755]: E0115 05:46:02.523060 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:46:03.349000 audit[5262]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:46:03.349000 audit[5262]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcbf46f020 a2=0 a3=7ffcbf46f00c items=0 ppid=2864 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:03.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:46:03.357000 audit[5262]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5262 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:46:03.357000 audit[5262]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcbf46f020 a2=0 a3=7ffcbf46f00c items=0 ppid=2864 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:03.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:46:03.525370 kubelet[2755]: E0115 05:46:03.524783 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b" Jan 15 05:46:04.527360 kubelet[2755]: E0115 05:46:04.527202 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8b465fb-vtc8s" podUID="99bf8dba-e773-442c-a667-c161cf0a56cd" Jan 15 05:46:06.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.123:22-10.0.0.1:36264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:06.660777 systemd[1]: Started sshd@22-10.0.0.123:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Jan 15 05:46:06.673251 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 05:46:06.673305 kernel: audit: type=1130 audit(1768455966.659:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.123:22-10.0.0.1:36264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:06.740000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.745170 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:46:06.745906 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:46:06.742000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.754172 systemd-logind[1578]: New session 24 of user core. 
Jan 15 05:46:06.765592 kernel: audit: type=1101 audit(1768455966.740:875): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.766243 kernel: audit: type=1103 audit(1768455966.742:876): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.766269 kernel: audit: type=1006 audit(1768455966.742:877): pid=5265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 15 05:46:06.742000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4a75cea0 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:06.786518 kernel: audit: type=1300 audit(1768455966.742:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4a75cea0 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:06.791709 kernel: audit: type=1327 audit(1768455966.742:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:06.742000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:06.787831 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 15 05:46:06.794000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.796000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.822860 kernel: audit: type=1105 audit(1768455966.794:878): pid=5265 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.822909 kernel: audit: type=1103 audit(1768455966.796:879): pid=5269 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.900263 sshd[5269]: Connection closed by 10.0.0.1 port 36264 Jan 15 05:46:06.900710 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Jan 15 05:46:06.901000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.907643 systemd[1]: sshd@22-10.0.0.123:22-10.0.0.1:36264.service: Deactivated successfully. Jan 15 05:46:06.911295 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 05:46:06.914292 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit. Jan 15 05:46:06.917401 systemd-logind[1578]: Removed session 24. Jan 15 05:46:06.918096 kernel: audit: type=1106 audit(1768455966.901:880): pid=5265 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.918125 kernel: audit: type=1104 audit(1768455966.901:881): pid=5265 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.901000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:06.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.123:22-10.0.0.1:36264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:09.523570 kubelet[2755]: E0115 05:46:09.523335 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q55xd" podUID="647d95db-9ea2-4c12-b24e-24a6a7b2ddc1" Jan 15 05:46:09.524209 kubelet[2755]: E0115 05:46:09.523640 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-k6vdp" podUID="1c7a473d-fbbd-41be-961a-cb9f606fd6ff" Jan 15 05:46:11.523088 kubelet[2755]: E0115 05:46:11.522856 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:46:11.922713 systemd[1]: Started sshd@23-10.0.0.123:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). Jan 15 05:46:11.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.123:22-10.0.0.1:36266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:11.927365 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:46:11.928640 kernel: audit: type=1130 audit(1768455971.922:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.123:22-10.0.0.1:36266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:12.003000 audit[5289]: USER_ACCT pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.005564 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:46:12.008881 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:46:12.019050 systemd-logind[1578]: New session 25 of user core. 
Jan 15 05:46:12.005000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.031598 kernel: audit: type=1101 audit(1768455972.003:884): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.031691 kernel: audit: type=1103 audit(1768455972.005:885): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.031746 kernel: audit: type=1006 audit(1768455972.005:886): pid=5289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 15 05:46:12.005000 audit[5289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6365c020 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:12.056114 kernel: audit: type=1300 audit(1768455972.005:886): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6365c020 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:12.005000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:12.060816 kernel: audit: type=1327 audit(1768455972.005:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:12.064992 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 15 05:46:12.068000 audit[5289]: USER_START pid=5289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.085664 kernel: audit: type=1105 audit(1768455972.068:887): pid=5289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.071000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.107594 kernel: audit: type=1103 audit(1768455972.071:888): pid=5293 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.270144 sshd[5293]: Connection closed by 10.0.0.1 port 36266 Jan 15 05:46:12.270744 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Jan 15 05:46:12.270000 audit[5289]: USER_END pid=5289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.276826 systemd-logind[1578]: Session 25 logged out. Waiting for processes to exit. Jan 15 05:46:12.278387 systemd[1]: sshd@23-10.0.0.123:22-10.0.0.1:36266.service: Deactivated successfully. Jan 15 05:46:12.281951 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 05:46:12.284572 systemd-logind[1578]: Removed session 25. Jan 15 05:46:12.271000 audit[5289]: CRED_DISP pid=5289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.297207 kernel: audit: type=1106 audit(1768455972.270:889): pid=5289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.297317 kernel: audit: type=1104 audit(1768455972.271:890): pid=5289 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:12.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.123:22-10.0.0.1:36266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:14.524038 kubelet[2755]: E0115 05:46:14.523981 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nzlc6" podUID="e9ba42b2-88e3-4065-bd62-0b6bb90b29e9" Jan 15 05:46:15.523695 containerd[1603]: time="2026-01-15T05:46:15.523621500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:46:15.584259 containerd[1603]: time="2026-01-15T05:46:15.584038495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:46:15.585877 containerd[1603]: time="2026-01-15T05:46:15.585700234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:46:15.585877 containerd[1603]: time="2026-01-15T05:46:15.585813726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:46:15.586254 kubelet[2755]: E0115 05:46:15.586175 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:46:15.586254 kubelet[2755]: E0115 05:46:15.586228 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:46:15.586701 kubelet[2755]: E0115 05:46:15.586334 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x6cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-766d8cc98b-vvx8x_calico-apiserver(bad082bb-9731-4bff-b64e-9964fb68119a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:46:15.588195 kubelet[2755]: E0115 05:46:15.588138 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-766d8cc98b-vvx8x" podUID="bad082bb-9731-4bff-b64e-9964fb68119a" Jan 15 05:46:17.283386 systemd[1]: Started sshd@24-10.0.0.123:22-10.0.0.1:57136.service - OpenSSH per-connection server daemon (10.0.0.1:57136). Jan 15 05:46:17.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.123:22-10.0.0.1:57136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:17.287105 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:46:17.287217 kernel: audit: type=1130 audit(1768455977.282:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.123:22-10.0.0.1:57136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:17.359000 audit[5311]: USER_ACCT pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.361174 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 57136 ssh2: RSA SHA256:rzJZ54vlZ/fHlb+C7pC7tDwWagmKhGnt/x8z7Ukuzgs Jan 15 05:46:17.364831 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:46:17.372689 systemd-logind[1578]: New session 26 of user core. Jan 15 05:46:17.361000 audit[5311]: CRED_ACQ pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.384552 kernel: audit: type=1101 audit(1768455977.359:893): pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.384617 kernel: audit: type=1103 audit(1768455977.361:894): pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.384641 kernel: audit: type=1006 audit(1768455977.361:895): pid=5311 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 15 05:46:17.361000 audit[5311]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff80e05b40 a2=3 a3=0 items=0 ppid=1 pid=5311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:17.402139 kernel: audit: type=1300 audit(1768455977.361:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff80e05b40 a2=3 a3=0 items=0 ppid=1 pid=5311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:17.402354 kernel: audit: type=1327 audit(1768455977.361:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:17.361000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:46:17.407832 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 15 05:46:17.413000 audit[5311]: USER_START pid=5311 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.415000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.438883 kernel: audit: type=1105 audit(1768455977.413:896): pid=5311 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.438957 kernel: audit: type=1103 audit(1768455977.415:897): pid=5315 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.520215 sshd[5315]: Connection closed by 10.0.0.1 port 57136 Jan 15 05:46:17.520918 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Jan 15 05:46:17.522000 audit[5311]: USER_END pid=5311 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.524892 containerd[1603]: time="2026-01-15T05:46:17.524197722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:46:17.527806 systemd[1]: sshd@24-10.0.0.123:22-10.0.0.1:57136.service: Deactivated successfully. Jan 15 05:46:17.530995 systemd[1]: session-26.scope: Deactivated successfully. Jan 15 05:46:17.534894 systemd-logind[1578]: Session 26 logged out. Waiting for processes to exit. Jan 15 05:46:17.538212 systemd-logind[1578]: Removed session 26. 
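The audit records above trace the full PAM lifecycle of SSH session 26: USER_ACCT and CRED_ACQ at login, USER_START when the session opens, then USER_END and CRED_DISP when it closes, all tied together by the same ses= value. The following is a minimal sketch that pairs USER_START/USER_END records by that ses field to measure session lifetimes; the regex and the "%b %d %H:%M:%S.%f" timestamp layout are assumptions read off the lines shown here, not a documented format.

# Sketch: pair USER_START/USER_END audit records by session id (ses=) to
# measure SSH session lifetimes from journal lines shaped like the ones above.
# The line layout and timestamp format are assumptions, not a stable interface.
import re
from datetime import datetime

LINE_RE = re.compile(
    r"(?P<ts>\w{3} \d{2} [\d:.]+) audit\[\d+\]: "
    r"(?P<type>USER_START|USER_END) .*?\bses=(?P<ses>\d+)"
)

def session_durations(lines):
    """Yield (ses, seconds) for every session that both opens and closes."""
    opened = {}
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%b %d %H:%M:%S.%f")
        ses = m.group("ses")
        if m.group("type") == "USER_START":
            opened[ses] = ts
        elif ses in opened:
            yield ses, (ts - opened.pop(ses)).total_seconds()

if __name__ == "__main__":
    sample = [
        "Jan 15 05:46:17.413000 audit[5311]: USER_START pid=5311 uid=0 auid=500 ses=26",
        "Jan 15 05:46:17.522000 audit[5311]: USER_END pid=5311 uid=0 auid=500 ses=26",
    ]
    for ses, secs in session_durations(sample):
        print(f"session {ses} lasted {secs:.3f}s")

Applied to the records above, session 26 opens and closes roughly 0.11 s apart, matching the near-immediate disconnect logged for port 57136.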
Jan 15 05:46:17.542038 kernel: audit: type=1106 audit(1768455977.522:898): pid=5311 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.522000 audit[5311]: CRED_DISP pid=5311 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.554782 kernel: audit: type=1104 audit(1768455977.522:899): pid=5311 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:46:17.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.123:22-10.0.0.1:57136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:17.594532 containerd[1603]: time="2026-01-15T05:46:17.594178323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:46:17.596674 containerd[1603]: time="2026-01-15T05:46:17.596601290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:46:17.596736 containerd[1603]: time="2026-01-15T05:46:17.596685880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:46:17.597129 kubelet[2755]: E0115 05:46:17.597008 2755 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:46:17.597129 kubelet[2755]: E0115 05:46:17.597096 2755 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:46:17.597813 kubelet[2755]: E0115 05:46:17.597285 2755 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgxmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c8cc478f5-z859x_calico-system(c9216aea-9a46-4a8f-81a9-8d30cdf7722b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:46:17.598839 kubelet[2755]: E0115 05:46:17.598703 2755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c8cc478f5-z859x" podUID="c9216aea-9a46-4a8f-81a9-8d30cdf7722b"
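Every Calico image pull in this window fails the same way: containerd's fetch gets an HTTP 404 from ghcr.io, surfaces it as a gRPC NotFound, and kubelet records ErrImagePull before the pod falls into back-off. A quick way to confirm that a tag really is missing from the registry is sketched below using only the Python standard library; the anonymous-token flow against ghcr.io's OCI distribution endpoints, and treating it as sufficient for these public repositories, are assumptions rather than something the log itself establishes.

# Sketch: ask ghcr.io's registry API whether a tag resolves, to reproduce the
# 404 that containerd reports above. The endpoint layout follows the standard
# OCI distribution flow; anonymous pull tokens for these repos are an assumption.
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    """Return True if ghcr.io/<repo>:<tag> resolves, False on HTTP 404."""
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull&service=ghcr.io"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # same "not found" containerd translates to NotFound
        raise

if __name__ == "__main__":
    for repo in ("flatcar/calico/apiserver", "flatcar/calico/kube-controllers"):
        print(repo, "v3.30.4 exists:", tag_exists(repo, "v3.30.4"))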