Jan 28 04:10:30.569775 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:22:24 -00 2026 Jan 28 04:10:30.569832 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 04:10:30.569848 kernel: BIOS-provided physical RAM map: Jan 28 04:10:30.569859 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 04:10:30.569883 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 04:10:30.569895 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 04:10:30.569907 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 28 04:10:30.570057 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 28 04:10:30.570071 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 04:10:30.570082 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 04:10:30.570170 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 04:10:30.570185 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 04:10:30.570196 kernel: NX (Execute Disable) protection: active Jan 28 04:10:30.570226 kernel: APIC: Static calls initialized Jan 28 04:10:30.570239 kernel: SMBIOS 2.8 present. Jan 28 04:10:30.570252 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 28 04:10:30.570286 kernel: DMI: Memory slots populated: 1/1 Jan 28 04:10:30.570313 kernel: Hypervisor detected: KVM Jan 28 04:10:30.570326 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 04:10:30.570338 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 04:10:30.570350 kernel: kvm-clock: using sched offset of 5522758360 cycles Jan 28 04:10:30.570363 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 04:10:30.570375 kernel: tsc: Detected 2799.998 MHz processor Jan 28 04:10:30.570387 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 04:10:30.570400 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 04:10:30.570426 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 04:10:30.570439 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 04:10:30.570452 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 04:10:30.570464 kernel: Using GB pages for direct mapping Jan 28 04:10:30.570476 kernel: ACPI: Early table checksum verification disabled Jan 28 04:10:30.570489 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 28 04:10:30.570501 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570513 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570539 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570554 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 28 04:10:30.570567 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570579 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570592 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570604 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 04:10:30.570659 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 28 04:10:30.570701 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 28 04:10:30.570715 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 28 04:10:30.570728 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 28 04:10:30.570741 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 28 04:10:30.570767 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 28 04:10:30.570780 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 28 04:10:30.570792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 28 04:10:30.570805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 28 04:10:30.570818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 28 04:10:30.570830 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 28 04:10:30.570843 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 28 04:10:30.570868 kernel: Zone ranges: Jan 28 04:10:30.570882 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 04:10:30.570895 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 28 04:10:30.570907 kernel: Normal empty Jan 28 04:10:30.570929 kernel: Device empty Jan 28 04:10:30.570941 kernel: Movable zone start for each node Jan 28 04:10:30.570954 kernel: Early memory node ranges Jan 28 04:10:30.570966 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 28 04:10:30.570992 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 28 04:10:30.571025 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 28 04:10:30.571040 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 04:10:30.571068 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 04:10:30.571083 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 28 04:10:30.571124 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 04:10:30.571159 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 04:10:30.571188 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 04:10:30.571202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 04:10:30.571214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 04:10:30.571227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 04:10:30.571240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 04:10:30.571253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 04:10:30.571265 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 04:10:30.571290 kernel: TSC deadline timer available Jan 28 04:10:30.571304 kernel: CPU topo: Max. logical packages: 16 Jan 28 04:10:30.571317 kernel: CPU topo: Max. logical dies: 16 Jan 28 04:10:30.571329 kernel: CPU topo: Max. dies per package: 1 Jan 28 04:10:30.571342 kernel: CPU topo: Max. 
threads per core: 1 Jan 28 04:10:30.571354 kernel: CPU topo: Num. cores per package: 1 Jan 28 04:10:30.571367 kernel: CPU topo: Num. threads per package: 1 Jan 28 04:10:30.571379 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 28 04:10:30.571406 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 04:10:30.571419 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 04:10:30.571431 kernel: Booting paravirtualized kernel on KVM Jan 28 04:10:30.571444 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 04:10:30.571457 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 28 04:10:30.571470 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 28 04:10:30.571482 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 28 04:10:30.571508 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 28 04:10:30.571521 kernel: kvm-guest: PV spinlocks enabled Jan 28 04:10:30.571534 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 04:10:30.571548 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 04:10:30.571561 kernel: random: crng init done Jan 28 04:10:30.571574 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 04:10:30.571599 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 28 04:10:30.571623 kernel: Fallback order for Node 0: 0 Jan 28 04:10:30.571636 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 28 04:10:30.571649 kernel: Policy zone: DMA32 Jan 28 04:10:30.571661 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 04:10:30.571674 kernel: software IO TLB: area num 16. Jan 28 04:10:30.571687 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 28 04:10:30.571699 kernel: Kernel/User page tables isolation: enabled Jan 28 04:10:30.571727 kernel: ftrace: allocating 40128 entries in 157 pages Jan 28 04:10:30.571740 kernel: ftrace: allocated 157 pages with 5 groups Jan 28 04:10:30.571753 kernel: Dynamic Preempt: voluntary Jan 28 04:10:30.571765 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 04:10:30.571784 kernel: rcu: RCU event tracing is enabled. Jan 28 04:10:30.571797 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 28 04:10:30.571810 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 04:10:30.571853 kernel: Rude variant of Tasks RCU enabled. Jan 28 04:10:30.571869 kernel: Tracing variant of Tasks RCU enabled. Jan 28 04:10:30.571882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 04:10:30.571895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 28 04:10:30.571908 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 04:10:30.571920 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 28 04:10:30.571933 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 04:10:30.571963 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 28 04:10:30.571977 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 04:10:30.572016 kernel: Console: colour VGA+ 80x25 Jan 28 04:10:30.572678 kernel: printk: legacy console [tty0] enabled Jan 28 04:10:30.572693 kernel: printk: legacy console [ttyS0] enabled Jan 28 04:10:30.572727 kernel: ACPI: Core revision 20240827 Jan 28 04:10:30.572742 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 04:10:30.572756 kernel: x2apic enabled Jan 28 04:10:30.572769 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 04:10:30.572798 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 28 04:10:30.572812 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 28 04:10:30.572825 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 04:10:30.572839 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 28 04:10:30.572865 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 28 04:10:30.572878 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 04:10:30.572891 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 04:10:30.572904 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 04:10:30.572917 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 28 04:10:30.572930 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 28 04:10:30.572943 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 28 04:10:30.572956 kernel: MDS: Mitigation: Clear CPU buffers Jan 28 04:10:30.572968 kernel: MMIO Stale Data: Unknown: No mitigations Jan 28 04:10:30.572981 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 28 04:10:30.573007 kernel: active return thunk: its_return_thunk Jan 28 04:10:30.573021 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 28 04:10:30.573034 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 04:10:30.573047 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 04:10:30.573060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 04:10:30.573073 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 04:10:30.573086 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 28 04:10:30.573237 kernel: Freeing SMP alternatives memory: 32K Jan 28 04:10:30.573253 kernel: pid_max: default: 32768 minimum: 301 Jan 28 04:10:30.573266 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 04:10:30.573298 kernel: landlock: Up and running. Jan 28 04:10:30.573313 kernel: SELinux: Initializing. Jan 28 04:10:30.573326 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 04:10:30.573339 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 04:10:30.573352 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 28 04:10:30.573365 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 28 04:10:30.573378 kernel: signal: max sigframe size: 1776 Jan 28 04:10:30.573413 kernel: rcu: Hierarchical SRCU implementation. Jan 28 04:10:30.573430 kernel: rcu: Max phase no-delay instances is 400. Jan 28 04:10:30.573460 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 28 04:10:30.573475 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 04:10:30.573488 kernel: smp: Bringing up secondary CPUs ... Jan 28 04:10:30.573501 kernel: smpboot: x86: Booting SMP configuration: Jan 28 04:10:30.573515 kernel: .... node #0, CPUs: #1 Jan 28 04:10:30.573528 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 04:10:30.574544 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 28 04:10:30.574582 kernel: Memory: 1912056K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 178544K reserved, 0K cma-reserved) Jan 28 04:10:30.574598 kernel: devtmpfs: initialized Jan 28 04:10:30.574624 kernel: x86/mm: Memory block size: 128MB Jan 28 04:10:30.574639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 04:10:30.574652 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 28 04:10:30.574666 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 04:10:30.574679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 04:10:30.574708 kernel: audit: initializing netlink subsys (disabled) Jan 28 04:10:30.574723 kernel: audit: type=2000 audit(1769573426.947:1): state=initialized audit_enabled=0 res=1 Jan 28 04:10:30.574736 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 04:10:30.574750 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 04:10:30.574763 kernel: cpuidle: using governor menu Jan 28 04:10:30.574776 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 04:10:30.574790 kernel: dca service started, version 1.12.1 Jan 28 04:10:30.574825 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 28 04:10:30.574855 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 04:10:30.574869 kernel: PCI: Using configuration type 1 for base access Jan 28 04:10:30.574882 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 28 04:10:30.574896 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 04:10:30.574909 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 04:10:30.574922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 04:10:30.574936 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 04:10:30.574964 kernel: ACPI: Added _OSI(Module Device) Jan 28 04:10:30.574978 kernel: ACPI: Added _OSI(Processor Device) Jan 28 04:10:30.574991 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 04:10:30.575005 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 04:10:30.575018 kernel: ACPI: Interpreter enabled Jan 28 04:10:30.575031 kernel: ACPI: PM: (supports S0 S5) Jan 28 04:10:30.575044 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 04:10:30.575072 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 04:10:30.575085 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 04:10:30.575120 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 04:10:30.575140 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 04:10:30.575474 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 04:10:30.575731 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 28 04:10:30.575987 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 28 04:10:30.576008 kernel: PCI host bridge to bus 0000:00 Jan 28 04:10:30.576285 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 04:10:30.576498 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 04:10:30.576723 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 04:10:30.576943 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 28 04:10:30.577226 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 04:10:30.577436 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 28 04:10:30.577660 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 04:10:30.577927 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 28 04:10:30.578251 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 28 04:10:30.578532 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 28 04:10:30.578775 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 28 04:10:30.579021 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 28 04:10:30.579278 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 04:10:30.579538 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.579811 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 28 04:10:30.580066 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 04:10:30.580475 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 04:10:30.583328 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 04:10:30.583583 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.583834 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 28 04:10:30.584100 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 
04:10:30.584391 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 04:10:30.584632 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 04:10:30.584876 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.585259 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 28 04:10:30.585495 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 04:10:30.585764 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 04:10:30.585992 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 04:10:30.588347 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.588589 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 28 04:10:30.588836 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 04:10:30.589137 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 04:10:30.589395 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 04:10:30.589652 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.589931 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 28 04:10:30.590194 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 04:10:30.590424 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 04:10:30.590666 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 04:10:30.590972 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.591250 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 28 04:10:30.591477 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 04:10:30.591718 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 04:10:30.591965 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 04:10:30.592222 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.592472 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 28 04:10:30.592715 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 04:10:30.593001 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 04:10:30.594814 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 04:10:30.595086 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 04:10:30.595375 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 28 04:10:30.595605 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 04:10:30.595871 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 04:10:30.596471 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 04:10:30.596798 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 28 04:10:30.597034 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 28 04:10:30.597314 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 28 04:10:30.597544 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 28 04:10:30.597811 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 28 04:10:30.598059 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 28 04:10:30.599427 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 28 04:10:30.599713 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 28 04:10:30.599974 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 28 04:10:30.600237 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 28 04:10:30.600471 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 04:10:30.600824 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 28 04:10:30.601056 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 28 04:10:30.603390 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 28 04:10:30.603656 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 28 04:10:30.603887 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 28 04:10:30.604158 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 28 04:10:30.604450 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 28 04:10:30.604699 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 04:10:30.604957 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 04:10:30.606557 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 04:10:30.606856 kernel: pci_bus 0000:02: extended config space not accessible Jan 28 04:10:30.607131 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 28 04:10:30.607420 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 28 04:10:30.607713 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 04:10:30.607958 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 28 04:10:30.608292 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 28 04:10:30.608534 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 04:10:30.608829 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 28 04:10:30.609063 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 28 04:10:30.609333 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 04:10:30.609582 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 04:10:30.609825 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 04:10:30.610071 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 04:10:30.610347 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 04:10:30.610576 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 04:10:30.610627 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 04:10:30.610643 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 04:10:30.610656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 04:10:30.610669 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 04:10:30.610715 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 04:10:30.610751 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 04:10:30.610781 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 04:10:30.610795 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 04:10:30.610809 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 04:10:30.610822 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 04:10:30.610836 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 04:10:30.610850 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 04:10:30.610863 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
28 04:10:30.610890 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 04:10:30.610905 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 04:10:30.610918 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 04:10:30.610931 kernel: iommu: Default domain type: Translated Jan 28 04:10:30.610945 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 04:10:30.610958 kernel: PCI: Using ACPI for IRQ routing Jan 28 04:10:30.610972 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 04:10:30.611000 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 04:10:30.611031 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 28 04:10:30.611280 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 04:10:30.611524 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 04:10:30.611776 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 04:10:30.611798 kernel: vgaarb: loaded Jan 28 04:10:30.611812 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 04:10:30.611845 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 04:10:30.611860 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 04:10:30.611873 kernel: pnp: PnP ACPI init Jan 28 04:10:30.612206 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 04:10:30.612229 kernel: pnp: PnP ACPI: found 5 devices Jan 28 04:10:30.612243 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 04:10:30.612257 kernel: NET: Registered PF_INET protocol family Jan 28 04:10:30.612289 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 04:10:30.612303 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 28 04:10:30.612317 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 04:10:30.612330 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 04:10:30.612344 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 28 04:10:30.612358 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 28 04:10:30.612371 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 04:10:30.612400 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 04:10:30.612414 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 04:10:30.612427 kernel: NET: Registered PF_XDP protocol family Jan 28 04:10:30.612669 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 28 04:10:30.612898 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 28 04:10:30.613163 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 28 04:10:30.613414 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 28 04:10:30.613685 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 28 04:10:30.613931 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 28 04:10:30.614176 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 28 04:10:30.614430 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 28 04:10:30.614727 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 28 04:10:30.614998 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 28 04:10:30.615267 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 28 04:10:30.615501 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 28 04:10:30.615749 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 28 04:10:30.616053 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 28 04:10:30.616302 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 28 04:10:30.616526 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 28 04:10:30.616812 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 04:10:30.617146 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 04:10:30.617373 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 04:10:30.617666 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 28 04:10:30.617923 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 04:10:30.618206 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 04:10:30.618432 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 04:10:30.618710 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 28 04:10:30.618938 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 04:10:30.619189 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 04:10:30.619413 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 04:10:30.619685 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 28 04:10:30.619914 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 04:10:30.620184 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 04:10:30.620409 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 04:10:30.620699 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 28 04:10:30.620925 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 04:10:30.621167 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 04:10:30.621477 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 04:10:30.621875 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 28 04:10:30.622122 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 04:10:30.622348 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 04:10:30.622592 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 04:10:30.622850 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 28 04:10:30.623097 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 04:10:30.623339 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 04:10:30.623637 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 04:10:30.623865 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 28 04:10:30.624088 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 04:10:30.624352 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 04:10:30.624577 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 04:10:30.624834 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 28 04:10:30.625057 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 04:10:30.625300 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 04:10:30.625535 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 04:10:30.625759 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 04:10:30.625966 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 04:10:30.626228 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 28 04:10:30.626476 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 04:10:30.626699 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 28 04:10:30.626926 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 28 04:10:30.627193 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 28 04:10:30.627466 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 04:10:30.627730 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 28 04:10:30.627956 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 28 04:10:30.628194 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 28 04:10:30.628407 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 04:10:30.628652 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 28 04:10:30.628867 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 28 04:10:30.629141 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 04:10:30.629388 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 28 04:10:30.629602 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 28 04:10:30.629846 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 04:10:30.630109 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 28 04:10:30.630386 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 28 04:10:30.630601 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 04:10:30.630839 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 28 04:10:30.631134 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 28 04:10:30.631379 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 04:10:30.631618 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 28 04:10:30.631855 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 28 04:10:30.632115 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 04:10:30.632341 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 28 04:10:30.632556 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 28 04:10:30.632785 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 04:10:30.632807 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 04:10:30.632853 kernel: PCI: CLS 0 bytes, default 64 Jan 28 04:10:30.632869 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 28 04:10:30.632884 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 28 04:10:30.632898 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 28 04:10:30.632912 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 28 04:10:30.632926 kernel: Initialise system trusted keyrings Jan 28 04:10:30.632940 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 28 04:10:30.632976 
kernel: Key type asymmetric registered Jan 28 04:10:30.632990 kernel: Asymmetric key parser 'x509' registered Jan 28 04:10:30.633004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 04:10:30.633024 kernel: io scheduler mq-deadline registered Jan 28 04:10:30.633038 kernel: io scheduler kyber registered Jan 28 04:10:30.633052 kernel: io scheduler bfq registered Jan 28 04:10:30.633304 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 28 04:10:30.633565 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 28 04:10:30.633808 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.634036 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 28 04:10:30.634298 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 28 04:10:30.634554 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.634836 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 28 04:10:30.635061 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 28 04:10:30.635305 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.635542 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 28 04:10:30.635782 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 28 04:10:30.636029 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.636285 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 28 04:10:30.636511 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 28 04:10:30.636759 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.636985 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 28 04:10:30.637252 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 28 04:10:30.637476 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.637748 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 28 04:10:30.637972 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 28 04:10:30.638217 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.638480 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 28 04:10:30.638729 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 28 04:10:30.638955 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 04:10:30.638977 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 04:10:30.638992 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 04:10:30.639015 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 04:10:30.639049 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 04:10:30.639072 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 04:10:30.639086 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 
04:10:30.639118 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 04:10:30.639132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 04:10:30.639366 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 28 04:10:30.639388 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 04:10:30.639637 kernel: rtc_cmos 00:03: registered as rtc0 Jan 28 04:10:30.639857 kernel: rtc_cmos 00:03: setting system clock to 2026-01-28T04:10:28 UTC (1769573428) Jan 28 04:10:30.640092 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 28 04:10:30.640112 kernel: intel_pstate: CPU model not supported Jan 28 04:10:30.640147 kernel: NET: Registered PF_INET6 protocol family Jan 28 04:10:30.640162 kernel: Segment Routing with IPv6 Jan 28 04:10:30.640194 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 04:10:30.640209 kernel: NET: Registered PF_PACKET protocol family Jan 28 04:10:30.640224 kernel: Key type dns_resolver registered Jan 28 04:10:30.640238 kernel: IPI shorthand broadcast: enabled Jan 28 04:10:30.640251 kernel: sched_clock: Marking stable (2342004243, 210902391)->(2683690562, -130783928) Jan 28 04:10:30.640265 kernel: registered taskstats version 1 Jan 28 04:10:30.640279 kernel: Loading compiled-in X.509 certificates Jan 28 04:10:30.640293 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 0eb3c2aae9988d4ab7f0e142c4f5c61453c9ddb3' Jan 28 04:10:30.640322 kernel: Demotion targets for Node 0: null Jan 28 04:10:30.640337 kernel: Key type .fscrypt registered Jan 28 04:10:30.640351 kernel: Key type fscrypt-provisioning registered Jan 28 04:10:30.640365 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 04:10:30.640379 kernel: ima: Allocated hash algorithm: sha1 Jan 28 04:10:30.640393 kernel: ima: No architecture policies found Jan 28 04:10:30.640407 kernel: clk: Disabling unused clocks Jan 28 04:10:30.640436 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 28 04:10:30.640450 kernel: Write protecting the kernel read-only data: 47104k Jan 28 04:10:30.640464 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 28 04:10:30.640478 kernel: Run /init as init process Jan 28 04:10:30.640492 kernel: with arguments: Jan 28 04:10:30.640506 kernel: /init Jan 28 04:10:30.640520 kernel: with environment: Jan 28 04:10:30.640547 kernel: HOME=/ Jan 28 04:10:30.640561 kernel: TERM=linux Jan 28 04:10:30.640575 kernel: ACPI: bus type USB registered Jan 28 04:10:30.640590 kernel: usbcore: registered new interface driver usbfs Jan 28 04:10:30.640603 kernel: usbcore: registered new interface driver hub Jan 28 04:10:30.640629 kernel: usbcore: registered new device driver usb Jan 28 04:10:30.640867 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 04:10:30.641144 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 28 04:10:30.641377 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 28 04:10:30.641606 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 04:10:30.641852 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 28 04:10:30.642082 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 28 04:10:30.642404 kernel: hub 1-0:1.0: USB hub found Jan 28 04:10:30.642695 kernel: hub 1-0:1.0: 4 ports detected Jan 28 04:10:30.642957 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 28 04:10:30.643280 kernel: hub 2-0:1.0: USB hub found Jan 28 04:10:30.643539 kernel: hub 2-0:1.0: 4 ports detected Jan 28 04:10:30.643559 kernel: SCSI subsystem initialized Jan 28 04:10:30.643573 kernel: libata version 3.00 loaded. Jan 28 04:10:30.643852 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 04:10:30.643875 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 04:10:30.644111 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 28 04:10:30.644364 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 28 04:10:30.644633 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 04:10:30.644889 kernel: scsi host0: ahci Jan 28 04:10:30.645177 kernel: scsi host1: ahci Jan 28 04:10:30.645486 kernel: scsi host2: ahci Jan 28 04:10:30.645755 kernel: scsi host3: ahci Jan 28 04:10:30.645997 kernel: scsi host4: ahci Jan 28 04:10:30.646285 kernel: scsi host5: ahci Jan 28 04:10:30.646330 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 28 04:10:30.646345 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 28 04:10:30.646360 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 28 04:10:30.646382 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 28 04:10:30.646397 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 28 04:10:30.646411 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 28 04:10:30.646703 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 28 04:10:30.646727 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 04:10:30.646742 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 04:10:30.646756 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 04:10:30.646770 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 04:10:30.646784 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 04:10:30.646798 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 28 04:10:30.646831 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 04:10:30.646845 kernel: usbcore: registered new interface driver usbhid Jan 28 04:10:30.647134 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 28 04:10:30.647170 kernel: usbhid: USB HID core driver Jan 28 04:10:30.647388 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 28 04:10:30.647410 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 28 04:10:30.647444 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 04:10:30.647752 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 28 04:10:30.647776 kernel: GPT:25804799 != 125829119 Jan 28 04:10:30.647790 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 04:10:30.647803 kernel: GPT:25804799 != 125829119 Jan 28 04:10:30.647816 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 04:10:30.647849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 04:10:30.647865 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 28 04:10:30.647879 kernel: device-mapper: uevent: version 1.0.3 Jan 28 04:10:30.647893 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 04:10:30.647907 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 04:10:30.647921 kernel: raid6: sse2x4 gen() 14236 MB/s Jan 28 04:10:30.647935 kernel: raid6: sse2x2 gen() 9716 MB/s Jan 28 04:10:30.647964 kernel: raid6: sse2x1 gen() 9647 MB/s Jan 28 04:10:30.647978 kernel: raid6: using algorithm sse2x4 gen() 14236 MB/s Jan 28 04:10:30.647992 kernel: raid6: .... xor() 7867 MB/s, rmw enabled Jan 28 04:10:30.648006 kernel: raid6: using ssse3x2 recovery algorithm Jan 28 04:10:30.648020 kernel: xor: automatically using best checksumming function avx Jan 28 04:10:30.648046 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 04:10:30.648060 kernel: BTRFS: device fsid 0f5fa021-4357-40bb-b32a-e1579c5824ad devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (193) Jan 28 04:10:30.648087 kernel: BTRFS info (device dm-0): first mount of filesystem 0f5fa021-4357-40bb-b32a-e1579c5824ad Jan 28 04:10:30.648131 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 04:10:30.648145 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 04:10:30.648158 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 04:10:30.648184 kernel: loop: module loaded Jan 28 04:10:30.648197 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 04:10:30.648220 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 04:10:30.648251 systemd[1]: Successfully made /usr/ read-only. Jan 28 04:10:30.648271 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 04:10:30.648286 systemd[1]: Detected virtualization kvm. Jan 28 04:10:30.648305 systemd[1]: Detected architecture x86-64. Jan 28 04:10:30.648319 systemd[1]: Running in initrd. Jan 28 04:10:30.648333 systemd[1]: No hostname configured, using default hostname. Jan 28 04:10:30.648362 systemd[1]: Hostname set to . Jan 28 04:10:30.648377 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 04:10:30.648391 systemd[1]: Queued start job for default target initrd.target. Jan 28 04:10:30.648405 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 04:10:30.648432 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 04:10:30.648447 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 04:10:30.648478 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 04:10:30.648495 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 04:10:30.648523 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 04:10:30.648538 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 04:10:30.648552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 28 04:10:30.648567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 04:10:30.648616 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 04:10:30.648634 systemd[1]: Reached target paths.target - Path Units. Jan 28 04:10:30.648649 systemd[1]: Reached target slices.target - Slice Units. Jan 28 04:10:30.648664 systemd[1]: Reached target swap.target - Swaps. Jan 28 04:10:30.648679 systemd[1]: Reached target timers.target - Timer Units. Jan 28 04:10:30.648694 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 04:10:30.648708 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 04:10:30.648740 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 04:10:30.648756 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 04:10:30.648770 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 04:10:30.648785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 04:10:30.648800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 04:10:30.648815 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 04:10:30.648830 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 04:10:30.648859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 04:10:30.648875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 04:10:30.648890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 04:10:30.648904 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 04:10:30.648920 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 04:10:30.648935 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 04:10:30.648963 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 04:10:30.648979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 04:10:30.648995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 04:10:30.649010 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 04:10:30.649038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 04:10:30.649054 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 04:10:30.649069 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 04:10:30.649144 systemd-journald[331]: Collecting audit messages is enabled. Jan 28 04:10:30.649196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 04:10:30.649230 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 04:10:30.649244 kernel: Bridge firewalling registered Jan 28 04:10:30.649259 systemd-journald[331]: Journal started Jan 28 04:10:30.649319 systemd-journald[331]: Runtime Journal (/run/log/journal/65269eb336c54b659150972d0ebcffd6) is 4.7M, max 37.7M, 33M free. 
Jan 28 04:10:30.630308 systemd-modules-load[333]: Inserted module 'br_netfilter' Jan 28 04:10:30.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.677558 kernel: audit: type=1130 audit(1769573430.675:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.681558 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 04:10:30.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.689151 kernel: audit: type=1130 audit(1769573430.683:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.690285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 04:10:30.696499 kernel: audit: type=1130 audit(1769573430.690:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.691651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 04:10:30.703127 kernel: audit: type=1130 audit(1769573430.697:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.702074 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 04:10:30.705817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 04:10:30.709310 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 04:10:30.714862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 04:10:30.738414 systemd-tmpfiles[351]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 04:10:30.740575 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 04:10:30.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.753177 kernel: audit: type=1130 audit(1769573430.743:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:30.753216 kernel: audit: type=1334 audit(1769573430.744:7): prog-id=6 op=LOAD Jan 28 04:10:30.744000 audit: BPF prog-id=6 op=LOAD Jan 28 04:10:30.752209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 04:10:30.756268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 04:10:30.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.762801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 04:10:30.769614 kernel: audit: type=1130 audit(1769573430.756:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.769649 kernel: audit: type=1130 audit(1769573430.763:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.769204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 04:10:30.776331 kernel: audit: type=1130 audit(1769573430.770:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.777135 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 04:10:30.803782 dracut-cmdline[371]: dracut-109 Jan 28 04:10:30.810309 dracut-cmdline[371]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 04:10:30.842162 systemd-resolved[366]: Positive Trust Anchors: Jan 28 04:10:30.842181 systemd-resolved[366]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 04:10:30.842187 systemd-resolved[366]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 04:10:30.842229 systemd-resolved[366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 04:10:30.878021 systemd-resolved[366]: Defaulting to hostname 'linux'. Jan 28 04:10:30.880775 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 04:10:30.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:30.883858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 04:10:30.962126 kernel: Loading iSCSI transport class v2.0-870. Jan 28 04:10:30.982168 kernel: iscsi: registered transport (tcp) Jan 28 04:10:31.011131 kernel: iscsi: registered transport (qla4xxx) Jan 28 04:10:31.013205 kernel: QLogic iSCSI HBA Driver Jan 28 04:10:31.048844 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 04:10:31.075787 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 04:10:31.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.079009 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 04:10:31.143576 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 04:10:31.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.146618 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 04:10:31.149323 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 04:10:31.193250 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 04:10:31.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.195000 audit: BPF prog-id=7 op=LOAD Jan 28 04:10:31.195000 audit: BPF prog-id=8 op=LOAD Jan 28 04:10:31.197314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 04:10:31.237368 systemd-udevd[598]: Using default interface naming scheme 'v257'. Jan 28 04:10:31.258441 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 04:10:31.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:31.263544 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 04:10:31.299786 dracut-pre-trigger[671]: rd.md=0: removing MD RAID activation Jan 28 04:10:31.305012 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 04:10:31.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.308000 audit: BPF prog-id=9 op=LOAD Jan 28 04:10:31.309539 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 04:10:31.337867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 04:10:31.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.341326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 04:10:31.376640 systemd-networkd[714]: lo: Link UP Jan 28 04:10:31.377539 systemd-networkd[714]: lo: Gained carrier Jan 28 04:10:31.379590 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 04:10:31.386874 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 28 04:10:31.386905 kernel: audit: type=1130 audit(1769573431.380:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.381235 systemd[1]: Reached target network.target - Network. Jan 28 04:10:31.500300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 04:10:31.510362 kernel: audit: type=1130 audit(1769573431.501:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.505027 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 04:10:31.668246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 04:10:31.683337 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 04:10:31.715521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 04:10:31.728344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 04:10:31.731498 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 04:10:31.763494 disk-uuid[767]: Primary Header is updated. Jan 28 04:10:31.763494 disk-uuid[767]: Secondary Entries is updated. Jan 28 04:10:31.763494 disk-uuid[767]: Secondary Header is updated. 
Jan 28 04:10:31.772908 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 04:10:31.797114 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 04:10:31.801116 kernel: AES CTR mode by8 optimization enabled Jan 28 04:10:31.875076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 04:10:31.875286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 04:10:31.887823 kernel: audit: type=1131 audit(1769573431.877:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:31.877219 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 04:10:31.887408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 04:10:31.894636 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 04:10:31.894650 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 04:10:31.896532 systemd-networkd[714]: eth0: Link UP Jan 28 04:10:31.896848 systemd-networkd[714]: eth0: Gained carrier Jan 28 04:10:31.896863 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 04:10:31.911821 systemd-networkd[714]: eth0: DHCPv4 address 10.230.66.102/30, gateway 10.230.66.101 acquired from 10.230.66.101 Jan 28 04:10:32.057150 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 04:10:32.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.065476 kernel: audit: type=1130 audit(1769573432.059:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.065049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 04:10:32.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.068484 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 04:10:32.073218 kernel: audit: type=1130 audit(1769573432.066:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.073524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 04:10:32.074266 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 04:10:32.077254 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 04:10:32.099703 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 28 04:10:32.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.106146 kernel: audit: type=1130 audit(1769573432.101:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.866567 disk-uuid[768]: Warning: The kernel is still using the old partition table. Jan 28 04:10:32.866567 disk-uuid[768]: The new table will be used at the next reboot or after you Jan 28 04:10:32.866567 disk-uuid[768]: run partprobe(8) or kpartx(8) Jan 28 04:10:32.866567 disk-uuid[768]: The operation has completed successfully. Jan 28 04:10:32.876071 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 04:10:32.876340 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 04:10:32.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.880302 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 04:10:32.894944 kernel: audit: type=1130 audit(1769573432.877:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.894996 kernel: audit: type=1131 audit(1769573432.877:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.929142 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (853) Jan 28 04:10:32.934035 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 04:10:32.934174 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 04:10:32.939967 kernel: BTRFS info (device vda6): turning on async discard Jan 28 04:10:32.940011 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 04:10:32.949143 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 04:10:32.950507 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 04:10:32.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:32.954440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 04:10:32.959367 kernel: audit: type=1130 audit(1769573432.951:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:33.150423 systemd-networkd[714]: eth0: Gained IPv6LL Jan 28 04:10:33.245310 ignition[872]: Ignition 2.24.0 Jan 28 04:10:33.246442 ignition[872]: Stage: fetch-offline Jan 28 04:10:33.247209 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:33.247233 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:33.247418 ignition[872]: parsed url from cmdline: "" Jan 28 04:10:33.247425 ignition[872]: no config URL provided Jan 28 04:10:33.247435 ignition[872]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 04:10:33.251953 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 04:10:33.247455 ignition[872]: no config at "/usr/lib/ignition/user.ign" Jan 28 04:10:33.259332 kernel: audit: type=1130 audit(1769573433.253:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.255754 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 04:10:33.247464 ignition[872]: failed to fetch config: resource requires networking Jan 28 04:10:33.249876 ignition[872]: Ignition finished successfully Jan 28 04:10:33.288261 ignition[878]: Ignition 2.24.0 Jan 28 04:10:33.288279 ignition[878]: Stage: fetch Jan 28 04:10:33.288572 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:33.288594 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:33.288795 ignition[878]: parsed url from cmdline: "" Jan 28 04:10:33.288802 ignition[878]: no config URL provided Jan 28 04:10:33.288831 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 04:10:33.288848 ignition[878]: no config at "/usr/lib/ignition/user.ign" Jan 28 04:10:33.291297 ignition[878]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 28 04:10:33.291330 ignition[878]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 28 04:10:33.291399 ignition[878]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 28 04:10:33.308903 ignition[878]: GET result: OK Jan 28 04:10:33.309537 ignition[878]: parsing config with SHA512: 172cadf5a473e39051983f5a3f7fbfdbbab61ca32f25e5468a500f2e7b949a7dc9ceab487e48a0f86e216602960e6b5ce69329fc319f36410464278ffcc49f94 Jan 28 04:10:33.322310 unknown[878]: fetched base config from "system" Jan 28 04:10:33.322791 ignition[878]: fetch: fetch complete Jan 28 04:10:33.322327 unknown[878]: fetched base config from "system" Jan 28 04:10:33.322799 ignition[878]: fetch: fetch passed Jan 28 04:10:33.322337 unknown[878]: fetched user config from "openstack" Jan 28 04:10:33.322867 ignition[878]: Ignition finished successfully Jan 28 04:10:33.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.326791 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 04:10:33.329537 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 28 04:10:33.360259 ignition[884]: Ignition 2.24.0 Jan 28 04:10:33.361303 ignition[884]: Stage: kargs Jan 28 04:10:33.362080 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:33.362808 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:33.363737 ignition[884]: kargs: kargs passed Jan 28 04:10:33.363811 ignition[884]: Ignition finished successfully Jan 28 04:10:33.366191 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 04:10:33.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.369458 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 04:10:33.397416 ignition[890]: Ignition 2.24.0 Jan 28 04:10:33.397450 ignition[890]: Stage: disks Jan 28 04:10:33.397710 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:33.400548 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 04:10:33.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.397730 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:33.402050 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 04:10:33.398981 ignition[890]: disks: disks passed Jan 28 04:10:33.403046 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 04:10:33.399050 ignition[890]: Ignition finished successfully Jan 28 04:10:33.404688 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 04:10:33.406134 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 04:10:33.407318 systemd[1]: Reached target basic.target - Basic System. Jan 28 04:10:33.411271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 04:10:33.471310 systemd-fsck[898]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 28 04:10:33.475156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 04:10:33.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.477354 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 04:10:33.613110 kernel: EXT4-fs (vda9): mounted filesystem 60a46795-cc10-4076-a709-d039d1c23a6b r/w with ordered data mode. Quota mode: none. Jan 28 04:10:33.614026 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 04:10:33.615423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 04:10:33.618205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 04:10:33.620071 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 04:10:33.623357 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 04:10:33.626302 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... 
Jan 28 04:10:33.627063 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 04:10:33.629145 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 04:10:33.638236 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 04:10:33.642191 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 04:10:33.667016 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (906) Jan 28 04:10:33.671978 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 04:10:33.672031 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 04:10:33.683268 kernel: BTRFS info (device vda6): turning on async discard Jan 28 04:10:33.683353 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 04:10:33.696480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 04:10:33.765124 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:33.973217 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 04:10:33.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:33.977432 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 04:10:33.980360 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 04:10:34.008314 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 04:10:34.011886 kernel: BTRFS info (device vda6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 04:10:34.016300 systemd-networkd[714]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9099:24:19ff:fee6:4266/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9099:24:19ff:fee6:4266/64 assigned by NDisc. Jan 28 04:10:34.016311 systemd-networkd[714]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 04:10:34.061959 ignition[1008]: INFO : Ignition 2.24.0 Jan 28 04:10:34.064252 ignition[1008]: INFO : Stage: mount Jan 28 04:10:34.064252 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:34.064252 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:34.067214 ignition[1008]: INFO : mount: mount passed Jan 28 04:10:34.067214 ignition[1008]: INFO : Ignition finished successfully Jan 28 04:10:34.070470 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 04:10:34.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:34.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:34.082793 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 28 04:10:34.799120 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:36.809157 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:40.822190 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:40.830564 coreos-metadata[908]: Jan 28 04:10:40.830 WARN failed to locate config-drive, using the metadata service API instead Jan 28 04:10:40.855265 coreos-metadata[908]: Jan 28 04:10:40.855 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 04:10:40.869332 coreos-metadata[908]: Jan 28 04:10:40.869 INFO Fetch successful Jan 28 04:10:40.870156 coreos-metadata[908]: Jan 28 04:10:40.869 INFO wrote hostname srv-3avyi.gb1.brightbox.com to /sysroot/etc/hostname Jan 28 04:10:40.871737 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 28 04:10:40.885998 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 04:10:40.886044 kernel: audit: type=1130 audit(1769573440.874:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:40.886072 kernel: audit: type=1131 audit(1769573440.874:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:40.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:40.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:40.871953 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 28 04:10:40.877175 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 04:10:40.906544 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 04:10:40.936182 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024) Jan 28 04:10:40.941855 kernel: BTRFS info (device vda6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 04:10:40.941905 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 04:10:40.947784 kernel: BTRFS info (device vda6): turning on async discard Jan 28 04:10:40.947833 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 04:10:40.951195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 04:10:40.997015 ignition[1042]: INFO : Ignition 2.24.0 Jan 28 04:10:40.997015 ignition[1042]: INFO : Stage: files Jan 28 04:10:40.998942 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:40.998942 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:41.000792 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Jan 28 04:10:41.004654 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 04:10:41.004654 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 04:10:41.010442 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 04:10:41.011495 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 04:10:41.011495 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 04:10:41.011140 unknown[1042]: wrote ssh authorized keys file for user: core Jan 28 04:10:41.014338 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 04:10:41.014338 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 04:10:41.212550 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 04:10:41.468074 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 04:10:41.469695 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 04:10:41.469695 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 04:10:41.469695 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 04:10:41.469695 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 04:10:41.469695 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 04:10:41.469695 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 04:10:41.478256 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 04:10:41.486557 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 04:10:41.864669 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 04:10:42.977159 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 04:10:42.977159 ignition[1042]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 04:10:42.984792 ignition[1042]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 04:10:42.989112 ignition[1042]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 04:10:42.989112 ignition[1042]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 04:10:42.989112 ignition[1042]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 04:10:42.989112 ignition[1042]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 04:10:42.994238 ignition[1042]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 04:10:42.994238 ignition[1042]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 04:10:42.994238 ignition[1042]: INFO : files: files passed Jan 28 04:10:42.994238 ignition[1042]: INFO : Ignition finished successfully Jan 28 04:10:43.007239 kernel: audit: type=1130 audit(1769573442.997:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:42.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:42.995248 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 04:10:43.000600 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 04:10:43.008340 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 04:10:43.020436 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 04:10:43.020599 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 04:10:43.028213 kernel: audit: type=1130 audit(1769573443.021:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:43.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.033134 kernel: audit: type=1131 audit(1769573443.021:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.040851 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 04:10:43.040851 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 04:10:43.043816 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 04:10:43.044781 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 04:10:43.051584 kernel: audit: type=1130 audit(1769573443.045:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.046478 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 04:10:43.053552 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 04:10:43.110637 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 04:10:43.110910 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 04:10:43.122603 kernel: audit: type=1130 audit(1769573443.112:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.122667 kernel: audit: type=1131 audit(1769573443.112:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.112667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 04:10:43.123291 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 04:10:43.125010 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 04:10:43.126695 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 04:10:43.159390 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 28 04:10:43.166675 kernel: audit: type=1130 audit(1769573443.160:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.163317 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 04:10:43.189110 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 04:10:43.189551 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 04:10:43.191279 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 04:10:43.193055 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 04:10:43.194556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 04:10:43.201330 kernel: audit: type=1131 audit(1769573443.196:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.194755 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 04:10:43.201250 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 04:10:43.202241 systemd[1]: Stopped target basic.target - Basic System. Jan 28 04:10:43.203588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 04:10:43.205076 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 04:10:43.206475 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 04:10:43.208024 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 04:10:43.209424 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 04:10:43.210959 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 04:10:43.212352 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 04:10:43.213883 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 04:10:43.215370 systemd[1]: Stopped target swap.target - Swaps. Jan 28 04:10:43.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.216611 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 04:10:43.216805 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 04:10:43.218584 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 04:10:43.219623 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 04:10:43.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.220941 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 28 04:10:43.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.221182 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 04:10:43.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.222674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 04:10:43.222947 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 04:10:43.224772 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 04:10:43.224945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 04:10:43.225893 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 04:10:43.226053 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 04:10:43.229330 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 04:10:43.234326 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 04:10:43.235543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 04:10:43.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.235732 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 04:10:43.238364 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 04:10:43.238548 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 04:10:43.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.243862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 04:10:43.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.244026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 04:10:43.252982 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 04:10:43.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.253174 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 04:10:43.272445 ignition[1097]: INFO : Ignition 2.24.0 Jan 28 04:10:43.272445 ignition[1097]: INFO : Stage: umount Jan 28 04:10:43.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:43.277227 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 04:10:43.277227 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 04:10:43.277227 ignition[1097]: INFO : umount: umount passed Jan 28 04:10:43.277227 ignition[1097]: INFO : Ignition finished successfully Jan 28 04:10:43.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.275589 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 04:10:43.275807 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 04:10:43.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.277374 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 04:10:43.277489 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 04:10:43.278835 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 04:10:43.278905 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 04:10:43.281081 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 04:10:43.281216 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 04:10:43.282711 systemd[1]: Stopped target network.target - Network. Jan 28 04:10:43.285845 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 04:10:43.285950 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 04:10:43.287306 systemd[1]: Stopped target paths.target - Path Units. Jan 28 04:10:43.288453 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 04:10:43.292156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 04:10:43.293538 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 04:10:43.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.294730 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 04:10:43.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.296134 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 04:10:43.296203 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 04:10:43.297583 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 04:10:43.297645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 04:10:43.298839 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Jan 28 04:10:43.298901 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 04:10:43.300325 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 04:10:43.300431 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 04:10:43.301961 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 04:10:43.302031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 04:10:43.303525 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 04:10:43.304885 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 04:10:43.312253 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 04:10:43.317787 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 04:10:43.317982 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 04:10:43.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.321000 audit: BPF prog-id=6 op=UNLOAD Jan 28 04:10:43.321403 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 04:10:43.321598 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 04:10:43.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.325346 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 04:10:43.327000 audit: BPF prog-id=9 op=UNLOAD Jan 28 04:10:43.326158 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 04:10:43.326243 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 04:10:43.336277 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 04:10:43.336986 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 04:10:43.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.337080 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 04:10:43.338560 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 04:10:43.338634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 04:10:43.339339 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 04:10:43.339439 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 04:10:43.343605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 04:10:43.355704 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 28 04:10:43.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.356348 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 04:10:43.361083 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 04:10:43.362342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 04:10:43.363949 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 04:10:43.364809 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 04:10:43.366248 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 04:10:43.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.366329 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 04:10:43.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.367670 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 04:10:43.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.367745 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 04:10:43.369156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 04:10:43.369234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 04:10:43.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.373013 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 04:10:43.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.373754 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 04:10:43.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.373833 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 04:10:43.376512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 04:10:43.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.376583 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 04:10:43.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:43.379766 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 04:10:43.379845 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 04:10:43.381335 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 04:10:43.381437 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 04:10:43.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.383987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 04:10:43.384068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 04:10:43.386497 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 04:10:43.388224 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 04:10:43.398591 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 04:10:43.398781 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 04:10:43.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.468075 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 04:10:43.468315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 04:10:43.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.470355 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 04:10:43.471334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 04:10:43.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:43.471449 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 04:10:43.474086 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 04:10:43.500030 systemd[1]: Switching root. Jan 28 04:10:43.537410 systemd-journald[331]: Journal stopped Jan 28 04:10:45.081003 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). 
Jan 28 04:10:45.081084 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 04:10:45.083196 kernel: SELinux: policy capability open_perms=1 Jan 28 04:10:45.083225 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 04:10:45.083267 kernel: SELinux: policy capability always_check_network=0 Jan 28 04:10:45.083295 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 04:10:45.083320 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 04:10:45.083461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 04:10:45.083493 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 04:10:45.083521 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 04:10:45.083546 systemd[1]: Successfully loaded SELinux policy in 74.095ms. Jan 28 04:10:45.083593 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.233ms. Jan 28 04:10:45.083618 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 04:10:45.083640 systemd[1]: Detected virtualization kvm. Jan 28 04:10:45.083662 systemd[1]: Detected architecture x86-64. Jan 28 04:10:45.085241 systemd[1]: Detected first boot. Jan 28 04:10:45.085311 systemd[1]: Hostname set to . Jan 28 04:10:45.085352 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 04:10:45.085392 zram_generator::config[1140]: No configuration found. Jan 28 04:10:45.085423 kernel: Guest personality initialized and is inactive Jan 28 04:10:45.085444 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 04:10:45.085463 kernel: Initialized host personality Jan 28 04:10:45.085481 kernel: NET: Registered PF_VSOCK protocol family Jan 28 04:10:45.085508 systemd[1]: Populated /etc with preset unit settings. Jan 28 04:10:45.090805 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 04:10:45.090840 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 04:10:45.090875 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 04:10:45.090903 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 04:10:45.090926 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 04:10:45.090947 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 04:10:45.090968 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 04:10:45.091011 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 04:10:45.091035 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 04:10:45.091056 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 04:10:45.091077 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 04:10:45.091131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 04:10:45.091154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 04:10:45.091187 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 28 04:10:45.091231 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 04:10:45.091268 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 04:10:45.091289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 04:10:45.091321 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 04:10:45.091341 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 04:10:45.091399 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 04:10:45.091423 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 04:10:45.091444 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 04:10:45.091466 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 04:10:45.091488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 04:10:45.091509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 04:10:45.091530 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 04:10:45.091567 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 04:10:45.091592 systemd[1]: Reached target slices.target - Slice Units. Jan 28 04:10:45.091613 systemd[1]: Reached target swap.target - Swaps. Jan 28 04:10:45.091634 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 04:10:45.091656 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 04:10:45.091677 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 04:10:45.091698 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 04:10:45.091735 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 04:10:45.091758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 04:10:45.091780 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 04:10:45.091801 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 04:10:45.091821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 04:10:45.091842 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 04:10:45.091863 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 04:10:45.091899 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 04:10:45.091923 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 04:10:45.091945 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 04:10:45.091966 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:45.091988 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 04:10:45.092009 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 04:10:45.092031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 04:10:45.092067 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 28 04:10:45.092151 systemd[1]: Reached target machines.target - Containers. Jan 28 04:10:45.092182 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 04:10:45.092204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 04:10:45.092226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 04:10:45.092247 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 04:10:45.092269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 04:10:45.092308 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 04:10:45.092331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 04:10:45.092352 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 04:10:45.092383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 04:10:45.092406 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 04:10:45.092427 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 04:10:45.092447 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 04:10:45.092487 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 04:10:45.092511 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 04:10:45.092534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 04:10:45.092572 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 04:10:45.092595 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 04:10:45.092617 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 04:10:45.092639 kernel: fuse: init (API version 7.41) Jan 28 04:10:45.092660 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 04:10:45.092681 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 04:10:45.092702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 04:10:45.092739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:45.092764 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 04:10:45.092785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 04:10:45.092806 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 04:10:45.092843 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 04:10:45.092881 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 04:10:45.092943 systemd-journald[1222]: Collecting audit messages is enabled. Jan 28 04:10:45.092983 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 04:10:45.093006 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 04:10:45.093045 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 28 04:10:45.093083 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 04:10:45.093122 systemd-journald[1222]: Journal started Jan 28 04:10:45.093156 systemd-journald[1222]: Runtime Journal (/run/log/journal/65269eb336c54b659150972d0ebcffd6) is 4.7M, max 37.7M, 33M free. Jan 28 04:10:44.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:44.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:44.988000 audit: BPF prog-id=14 op=UNLOAD Jan 28 04:10:44.988000 audit: BPF prog-id=13 op=UNLOAD Jan 28 04:10:44.993000 audit: BPF prog-id=15 op=LOAD Jan 28 04:10:44.993000 audit: BPF prog-id=16 op=LOAD Jan 28 04:10:44.993000 audit: BPF prog-id=17 op=LOAD Jan 28 04:10:45.078000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 04:10:45.078000 audit[1222]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd7748600 a2=4000 a3=0 items=0 ppid=1 pid=1222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:10:45.078000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 04:10:45.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:44.718184 systemd[1]: Queued start job for default target multi-user.target. Jan 28 04:10:45.096177 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 04:10:45.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:44.735084 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 04:10:44.736003 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 04:10:45.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.098805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 04:10:45.099160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 04:10:45.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:45.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.102951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 04:10:45.103564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 04:10:45.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.105721 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 04:10:45.106056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 04:10:45.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.107712 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 04:10:45.108477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 04:10:45.111006 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 04:10:45.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.113220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 04:10:45.145395 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 04:10:45.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.160074 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 04:10:45.165253 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 28 04:10:45.181249 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 04:10:45.183209 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 04:10:45.183270 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 04:10:45.188575 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 04:10:45.192403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 04:10:45.192574 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 04:10:45.201160 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 04:10:45.204351 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 04:10:45.206271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 04:10:45.213470 kernel: ACPI: bus type drm_connector registered Jan 28 04:10:45.209042 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 04:10:45.212282 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 04:10:45.214480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 04:10:45.220587 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 04:10:45.226466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 04:10:45.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.233237 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 04:10:45.233902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 04:10:45.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.243475 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 04:10:45.245843 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 04:10:45.248506 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 04:10:45.251789 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 04:10:45.259879 systemd-journald[1222]: Time spent on flushing to /var/log/journal/65269eb336c54b659150972d0ebcffd6 is 105.270ms for 1292 entries. Jan 28 04:10:45.259879 systemd-journald[1222]: System Journal (/var/log/journal/65269eb336c54b659150972d0ebcffd6) is 8M, max 588.1M, 580.1M free. 
Jan 28 04:10:45.443617 systemd-journald[1222]: Received client request to flush runtime journal. Jan 28 04:10:45.443706 kernel: loop1: detected capacity change from 0 to 111560 Jan 28 04:10:45.443767 kernel: loop2: detected capacity change from 0 to 224512 Jan 28 04:10:45.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.271228 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 04:10:45.451878 kernel: loop3: detected capacity change from 0 to 50784 Jan 28 04:10:45.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.310202 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 04:10:45.312046 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 04:10:45.316735 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 04:10:45.332242 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 28 04:10:45.332262 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 28 04:10:45.355618 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 04:10:45.446426 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 04:10:45.448662 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 04:10:45.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.490274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 04:10:45.496333 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 04:10:45.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.501603 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 04:10:45.535144 kernel: loop4: detected capacity change from 0 to 8 Jan 28 04:10:45.567168 kernel: loop5: detected capacity change from 0 to 111560 Jan 28 04:10:45.572652 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 28 04:10:45.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:45.578000 audit: BPF prog-id=18 op=LOAD Jan 28 04:10:45.578000 audit: BPF prog-id=19 op=LOAD Jan 28 04:10:45.579000 audit: BPF prog-id=20 op=LOAD Jan 28 04:10:45.583675 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 04:10:45.584126 kernel: loop6: detected capacity change from 0 to 224512 Jan 28 04:10:45.587000 audit: BPF prog-id=21 op=LOAD Jan 28 04:10:45.590539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 04:10:45.596613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 04:10:45.608000 audit: BPF prog-id=22 op=LOAD Jan 28 04:10:45.608000 audit: BPF prog-id=23 op=LOAD Jan 28 04:10:45.609000 audit: BPF prog-id=24 op=LOAD Jan 28 04:10:45.613424 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 04:10:45.619000 audit: BPF prog-id=25 op=LOAD Jan 28 04:10:45.619000 audit: BPF prog-id=26 op=LOAD Jan 28 04:10:45.619000 audit: BPF prog-id=27 op=LOAD Jan 28 04:10:45.624126 kernel: loop7: detected capacity change from 0 to 50784 Jan 28 04:10:45.624152 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 04:10:45.646180 kernel: loop1: detected capacity change from 0 to 8 Jan 28 04:10:45.657955 (sd-merge)[1304]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Jan 28 04:10:45.685835 (sd-merge)[1304]: Merged extensions into '/usr'. Jan 28 04:10:45.701305 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 04:10:45.701331 systemd[1]: Reloading... Jan 28 04:10:45.722807 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jan 28 04:10:45.726210 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jan 28 04:10:45.748498 systemd-nsresourced[1310]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 04:10:45.864142 zram_generator::config[1353]: No configuration found. Jan 28 04:10:46.025970 systemd-resolved[1307]: Positive Trust Anchors: Jan 28 04:10:46.028147 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 04:10:46.028162 systemd-resolved[1307]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 04:10:46.028208 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 04:10:46.044358 systemd-oomd[1306]: No swap; memory pressure usage will be degraded Jan 28 04:10:46.060545 systemd-resolved[1307]: Using system hostname 'srv-3avyi.gb1.brightbox.com'. Jan 28 04:10:46.263176 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 04:10:46.263364 systemd[1]: Reloading finished in 561 ms. 
Jan 28 04:10:46.293463 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 04:10:46.300177 kernel: kauditd_printk_skb: 100 callbacks suppressed Jan 28 04:10:46.300269 kernel: audit: type=1130 audit(1769573446.294:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.299734 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 04:10:46.301490 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 04:10:46.302534 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 04:10:46.307591 kernel: audit: type=1130 audit(1769573446.300:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.307777 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 04:10:46.309368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 04:10:46.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.316178 kernel: audit: type=1130 audit(1769573446.302:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.325150 kernel: audit: type=1130 audit(1769573446.307:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.325430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 04:10:46.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.330173 kernel: audit: type=1130 audit(1769573446.308:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:46.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.335129 kernel: audit: type=1130 audit(1769573446.310:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.344210 systemd[1]: Starting ensure-sysext.service... Jan 28 04:10:46.347868 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 04:10:46.351000 audit: BPF prog-id=28 op=LOAD Jan 28 04:10:46.353129 kernel: audit: type=1334 audit(1769573446.351:152): prog-id=28 op=LOAD Jan 28 04:10:46.359000 audit: BPF prog-id=15 op=UNLOAD Jan 28 04:10:46.364118 kernel: audit: type=1334 audit(1769573446.359:153): prog-id=15 op=UNLOAD Jan 28 04:10:46.370152 kernel: audit: type=1334 audit(1769573446.360:154): prog-id=29 op=LOAD Jan 28 04:10:46.370228 kernel: audit: type=1334 audit(1769573446.360:155): prog-id=30 op=LOAD Jan 28 04:10:46.360000 audit: BPF prog-id=29 op=LOAD Jan 28 04:10:46.360000 audit: BPF prog-id=30 op=LOAD Jan 28 04:10:46.360000 audit: BPF prog-id=16 op=UNLOAD Jan 28 04:10:46.360000 audit: BPF prog-id=17 op=UNLOAD Jan 28 04:10:46.361000 audit: BPF prog-id=31 op=LOAD Jan 28 04:10:46.361000 audit: BPF prog-id=21 op=UNLOAD Jan 28 04:10:46.369000 audit: BPF prog-id=32 op=LOAD Jan 28 04:10:46.369000 audit: BPF prog-id=18 op=UNLOAD Jan 28 04:10:46.369000 audit: BPF prog-id=33 op=LOAD Jan 28 04:10:46.369000 audit: BPF prog-id=34 op=LOAD Jan 28 04:10:46.369000 audit: BPF prog-id=19 op=UNLOAD Jan 28 04:10:46.369000 audit: BPF prog-id=20 op=UNLOAD Jan 28 04:10:46.373000 audit: BPF prog-id=35 op=LOAD Jan 28 04:10:46.373000 audit: BPF prog-id=22 op=UNLOAD Jan 28 04:10:46.373000 audit: BPF prog-id=36 op=LOAD Jan 28 04:10:46.373000 audit: BPF prog-id=37 op=LOAD Jan 28 04:10:46.373000 audit: BPF prog-id=23 op=UNLOAD Jan 28 04:10:46.373000 audit: BPF prog-id=24 op=UNLOAD Jan 28 04:10:46.374000 audit: BPF prog-id=38 op=LOAD Jan 28 04:10:46.374000 audit: BPF prog-id=25 op=UNLOAD Jan 28 04:10:46.374000 audit: BPF prog-id=39 op=LOAD Jan 28 04:10:46.374000 audit: BPF prog-id=40 op=LOAD Jan 28 04:10:46.374000 audit: BPF prog-id=26 op=UNLOAD Jan 28 04:10:46.374000 audit: BPF prog-id=27 op=UNLOAD Jan 28 04:10:46.391568 systemd[1]: Reload requested from client PID 1410 ('systemctl') (unit ensure-sysext.service)... Jan 28 04:10:46.391593 systemd[1]: Reloading... Jan 28 04:10:46.394470 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 04:10:46.394523 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 04:10:46.394902 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 04:10:46.396901 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 28 04:10:46.397029 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Jan 28 04:10:46.407292 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 04:10:46.407312 systemd-tmpfiles[1411]: Skipping /boot Jan 28 04:10:46.430650 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 28 04:10:46.430669 systemd-tmpfiles[1411]: Skipping /boot Jan 28 04:10:46.544126 zram_generator::config[1458]: No configuration found. Jan 28 04:10:46.797886 systemd[1]: Reloading finished in 405 ms. Jan 28 04:10:46.812892 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 04:10:46.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.816000 audit: BPF prog-id=41 op=LOAD Jan 28 04:10:46.816000 audit: BPF prog-id=28 op=UNLOAD Jan 28 04:10:46.816000 audit: BPF prog-id=42 op=LOAD Jan 28 04:10:46.816000 audit: BPF prog-id=43 op=LOAD Jan 28 04:10:46.816000 audit: BPF prog-id=29 op=UNLOAD Jan 28 04:10:46.816000 audit: BPF prog-id=30 op=UNLOAD Jan 28 04:10:46.817000 audit: BPF prog-id=44 op=LOAD Jan 28 04:10:46.817000 audit: BPF prog-id=35 op=UNLOAD Jan 28 04:10:46.818000 audit: BPF prog-id=45 op=LOAD Jan 28 04:10:46.818000 audit: BPF prog-id=46 op=LOAD Jan 28 04:10:46.818000 audit: BPF prog-id=36 op=UNLOAD Jan 28 04:10:46.818000 audit: BPF prog-id=37 op=UNLOAD Jan 28 04:10:46.819000 audit: BPF prog-id=47 op=LOAD Jan 28 04:10:46.819000 audit: BPF prog-id=31 op=UNLOAD Jan 28 04:10:46.830000 audit: BPF prog-id=48 op=LOAD Jan 28 04:10:46.831000 audit: BPF prog-id=38 op=UNLOAD Jan 28 04:10:46.831000 audit: BPF prog-id=49 op=LOAD Jan 28 04:10:46.831000 audit: BPF prog-id=50 op=LOAD Jan 28 04:10:46.831000 audit: BPF prog-id=39 op=UNLOAD Jan 28 04:10:46.831000 audit: BPF prog-id=40 op=UNLOAD Jan 28 04:10:46.833000 audit: BPF prog-id=51 op=LOAD Jan 28 04:10:46.834000 audit: BPF prog-id=32 op=UNLOAD Jan 28 04:10:46.834000 audit: BPF prog-id=52 op=LOAD Jan 28 04:10:46.834000 audit: BPF prog-id=53 op=LOAD Jan 28 04:10:46.834000 audit: BPF prog-id=33 op=UNLOAD Jan 28 04:10:46.834000 audit: BPF prog-id=34 op=UNLOAD Jan 28 04:10:46.838966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 04:10:46.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.851422 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 04:10:46.856419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 04:10:46.862304 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 04:10:46.865297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 04:10:46.867000 audit: BPF prog-id=8 op=UNLOAD Jan 28 04:10:46.867000 audit: BPF prog-id=7 op=UNLOAD Jan 28 04:10:46.868000 audit: BPF prog-id=54 op=LOAD Jan 28 04:10:46.868000 audit: BPF prog-id=55 op=LOAD Jan 28 04:10:46.871543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 04:10:46.876909 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 04:10:46.882364 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:46.882650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 28 04:10:46.887688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 04:10:46.903186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 04:10:46.918860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 04:10:46.919789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 04:10:46.920069 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 04:10:46.921368 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 04:10:46.921520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:46.930618 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:46.930890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 04:10:46.931180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 04:10:46.932000 audit[1506]: SYSTEM_BOOT pid=1506 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.931426 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 04:10:46.931559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 04:10:46.931685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:46.938402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:46.938785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 04:10:46.942486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 04:10:46.943491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 04:10:46.943803 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 04:10:46.944015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 04:10:46.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:46.944315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 04:10:46.949041 systemd[1]: Finished ensure-sysext.service. Jan 28 04:10:46.955000 audit: BPF prog-id=56 op=LOAD Jan 28 04:10:46.960407 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 04:10:46.966720 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 04:10:46.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:46.992381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 04:10:46.993550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 04:10:47.020679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 04:10:47.024379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 04:10:47.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:47.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:47.028163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 04:10:47.028562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 04:10:47.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:47.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:47.031120 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 04:10:47.037547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 04:10:47.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:47.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:10:47.042934 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 04:10:47.043111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 04:10:47.048152 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 04:10:47.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:10:47.051075 systemd-udevd[1505]: Using default interface naming scheme 'v257'. Jan 28 04:10:47.113685 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 04:10:47.116000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 04:10:47.116000 audit[1542]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd70135a30 a2=420 a3=0 items=0 ppid=1501 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:10:47.116000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 04:10:47.116883 augenrules[1542]: No rules Jan 28 04:10:47.119020 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 04:10:47.120183 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 04:10:47.123309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 04:10:47.126972 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 04:10:47.128590 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 04:10:47.137141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 04:10:47.142690 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 04:10:47.262364 systemd-networkd[1551]: lo: Link UP Jan 28 04:10:47.262378 systemd-networkd[1551]: lo: Gained carrier Jan 28 04:10:47.264580 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 04:10:47.265941 systemd[1]: Reached target network.target - Network. Jan 28 04:10:47.282419 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 04:10:47.287438 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 04:10:47.362379 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 04:10:47.395962 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 28 04:10:47.560180 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 04:10:47.596165 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 28 04:10:47.598986 systemd-networkd[1551]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 04:10:47.598999 systemd-networkd[1551]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 04:10:47.602374 systemd-networkd[1551]: eth0: Link UP Jan 28 04:10:47.603041 systemd-networkd[1551]: eth0: Gained carrier Jan 28 04:10:47.603075 systemd-networkd[1551]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 04:10:47.622434 systemd-networkd[1551]: eth0: DHCPv4 address 10.230.66.102/30, gateway 10.230.66.101 acquired from 10.230.66.101 Jan 28 04:10:47.623665 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 28 04:10:47.628132 kernel: ACPI: button: Power Button [PWRF] Jan 28 04:10:47.704671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 04:10:47.708784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 04:10:47.730823 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 04:10:47.735429 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 04:10:47.758148 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 04:10:47.924815 ldconfig[1503]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 04:10:47.929821 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 04:10:47.983529 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 04:10:48.029690 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 04:10:48.073248 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 04:10:48.213588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 04:10:48.217053 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 04:10:48.218180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 04:10:48.219259 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 04:10:48.220144 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 04:10:48.221601 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 04:10:48.222864 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 04:10:48.224207 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 04:10:48.225518 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 04:10:48.226536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 04:10:48.227765 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 04:10:48.227843 systemd[1]: Reached target paths.target - Path Units. 
Jan 28 04:10:48.228961 systemd[1]: Reached target timers.target - Timer Units. Jan 28 04:10:48.232123 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 04:10:48.236471 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 04:10:48.242954 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 04:10:48.244924 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 04:10:48.246718 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 04:10:48.289394 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 04:10:48.290915 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 04:10:48.292932 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 04:10:48.297688 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 04:10:48.298381 systemd[1]: Reached target basic.target - Basic System. Jan 28 04:10:48.300477 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 04:10:48.300626 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 04:10:48.307451 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 04:10:48.314938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 04:10:48.319780 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 04:10:48.324533 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 04:10:48.331898 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 04:10:48.335951 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 04:10:48.337194 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 04:10:48.346253 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 04:10:48.354243 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 04:10:48.361356 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 04:10:48.382888 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 04:10:48.397153 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:48.409497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 04:10:48.426294 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 04:10:48.428245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 04:10:48.429288 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 04:10:48.441587 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 04:10:48.443471 jq[1608]: false Jan 28 04:10:48.449396 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 04:10:48.462535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 28 04:10:48.463933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 04:10:48.465541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 04:10:48.468114 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Refreshing passwd entry cache Jan 28 04:10:48.466588 oslogin_cache_refresh[1610]: Refreshing passwd entry cache Jan 28 04:10:48.503078 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 04:10:48.504983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 04:10:48.507493 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 04:10:48.519150 update_engine[1618]: I20260128 04:10:48.519019 1618 main.cc:92] Flatcar Update Engine starting Jan 28 04:10:48.524069 extend-filesystems[1609]: Found /dev/vda6 Jan 28 04:10:48.533803 jq[1619]: true Jan 28 04:10:48.544680 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Failure getting users, quitting Jan 28 04:10:48.544680 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 04:10:48.544680 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Refreshing group entry cache Jan 28 04:10:48.543616 oslogin_cache_refresh[1610]: Failure getting users, quitting Jan 28 04:10:48.543668 oslogin_cache_refresh[1610]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 04:10:48.543752 oslogin_cache_refresh[1610]: Refreshing group entry cache Jan 28 04:10:48.546457 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Failure getting groups, quitting Jan 28 04:10:48.546457 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 04:10:48.545263 oslogin_cache_refresh[1610]: Failure getting groups, quitting Jan 28 04:10:48.545285 oslogin_cache_refresh[1610]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 04:10:48.548810 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 04:10:48.549353 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 04:10:48.556117 extend-filesystems[1609]: Found /dev/vda9 Jan 28 04:10:48.569934 tar[1624]: linux-amd64/LICENSE Jan 28 04:10:48.569934 tar[1624]: linux-amd64/helm Jan 28 04:10:48.574644 extend-filesystems[1609]: Checking size of /dev/vda9 Jan 28 04:10:48.586170 dbus-daemon[1606]: [system] SELinux support is enabled Jan 28 04:10:48.586594 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 04:10:48.593339 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 04:10:48.593406 dbus-daemon[1606]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1551 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 28 04:10:48.593390 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 28 04:10:48.595236 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 04:10:48.595297 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 04:10:48.597858 update_engine[1618]: I20260128 04:10:48.597577 1618 update_check_scheduler.cc:74] Next update check in 6m16s Jan 28 04:10:48.599777 systemd[1]: Started update-engine.service - Update Engine. Jan 28 04:10:48.601078 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 04:10:48.605267 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 04:10:48.625633 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 28 04:10:48.629158 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 04:10:48.629636 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 04:10:48.652379 jq[1649]: true Jan 28 04:10:48.656145 extend-filesystems[1609]: Resized partition /dev/vda9 Jan 28 04:10:48.675135 extend-filesystems[1663]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 04:10:48.705196 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 28 04:10:48.987118 systemd-logind[1617]: Watching system buttons on /dev/input/event3 (Power Button) Jan 28 04:10:48.990162 systemd-logind[1617]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 04:10:48.990775 systemd-logind[1617]: New seat seat0. Jan 28 04:10:48.992475 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 04:10:49.084672 systemd-networkd[1551]: eth0: Gained IPv6LL Jan 28 04:10:49.087565 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 28 04:10:49.101277 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 04:10:49.103891 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 04:10:49.113430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:10:49.117474 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 04:10:49.130217 bash[1679]: Updated "/home/core/.ssh/authorized_keys" Jan 28 04:10:49.134676 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 04:10:49.145933 systemd[1]: Starting sshkeys.service... Jan 28 04:10:49.159652 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 28 04:10:49.177187 locksmithd[1656]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 04:10:49.190064 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 28 04:10:49.230265 dbus-daemon[1606]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1657 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 28 04:10:49.255856 systemd[1]: Starting polkit.service - Authorization Manager... Jan 28 04:10:49.311081 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 04:10:49.320841 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 28 04:10:49.367121 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 28 04:10:49.421149 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:49.475136 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:49.513082 polkitd[1698]: Started polkitd version 126 Jan 28 04:10:49.603127 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 28 04:10:49.636974 polkitd[1698]: Loading rules from directory /etc/polkit-1/rules.d Jan 28 04:10:49.639449 polkitd[1698]: Loading rules from directory /run/polkit-1/rules.d Jan 28 04:10:49.640039 polkitd[1698]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 04:10:49.641587 polkitd[1698]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 28 04:10:49.641638 polkitd[1698]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 28 04:10:49.641719 polkitd[1698]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 28 04:10:49.645181 polkitd[1698]: Finished loading, compiling and executing 2 rules Jan 28 04:10:49.646876 extend-filesystems[1663]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 04:10:49.646876 extend-filesystems[1663]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 28 04:10:49.646876 extend-filesystems[1663]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 28 04:10:49.653460 extend-filesystems[1609]: Resized filesystem in /dev/vda9 Jan 28 04:10:49.652082 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 28 04:10:49.651488 systemd[1]: Started polkit.service - Authorization Manager. Jan 28 04:10:49.659755 containerd[1648]: time="2026-01-28T04:10:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 04:10:49.659755 containerd[1648]: time="2026-01-28T04:10:49.658180807Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 04:10:49.654488 polkitd[1698]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 28 04:10:49.656487 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 04:10:49.658129 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 04:10:49.713493 systemd-hostnamed[1657]: Hostname set to (static) Jan 28 04:10:49.721836 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 28 04:10:49.724718 systemd-networkd[1551]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9099:24:19ff:fee6:4266/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9099:24:19ff:fee6:4266/64 assigned by NDisc. Jan 28 04:10:49.724889 systemd-networkd[1551]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
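A sanity check of the resize2fs figures above: with the 4 KiB block size the EXT4 messages report, the root filesystem grows from 1617920 blocks (about 6.2 GiB) to 14138363 blocks (about 53.9 GiB). Back-of-the-envelope in Python, with the block counts copied from the log:

    BLOCK = 4096                                    # "(4k) blocks" per the EXT4 messages above
    old_blocks, new_blocks = 1_617_920, 14_138_363

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before resize: {gib(old_blocks):.1f} GiB")   # ~6.2 GiB
    print(f"after resize:  {gib(new_blocks):.1f} GiB")   # ~53.9 GiB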
Jan 28 04:10:49.753137 containerd[1648]: time="2026-01-28T04:10:49.752456969Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.917µs" Jan 28 04:10:49.753137 containerd[1648]: time="2026-01-28T04:10:49.752516397Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 04:10:49.753137 containerd[1648]: time="2026-01-28T04:10:49.752611856Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 04:10:49.753137 containerd[1648]: time="2026-01-28T04:10:49.752664435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 04:10:49.753137 containerd[1648]: time="2026-01-28T04:10:49.753028195Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 04:10:49.753137 containerd[1648]: time="2026-01-28T04:10:49.753077773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 04:10:49.771535 containerd[1648]: time="2026-01-28T04:10:49.771395929Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 04:10:49.771759 containerd[1648]: time="2026-01-28T04:10:49.771734661Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 04:10:49.772212 containerd[1648]: time="2026-01-28T04:10:49.772180995Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 04:10:49.772326 containerd[1648]: time="2026-01-28T04:10:49.772301608Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 04:10:49.772460 containerd[1648]: time="2026-01-28T04:10:49.772434303Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 04:10:49.772544 containerd[1648]: time="2026-01-28T04:10:49.772522858Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 04:10:49.772951 containerd[1648]: time="2026-01-28T04:10:49.772923995Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 04:10:49.773081 containerd[1648]: time="2026-01-28T04:10:49.773058548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 04:10:49.773458 containerd[1648]: time="2026-01-28T04:10:49.773431550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 04:10:49.773998 containerd[1648]: time="2026-01-28T04:10:49.773970426Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 04:10:49.774164 containerd[1648]: time="2026-01-28T04:10:49.774126497Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 28 04:10:49.774261 containerd[1648]: time="2026-01-28T04:10:49.774240550Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 04:10:49.774455 containerd[1648]: time="2026-01-28T04:10:49.774428853Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 04:10:49.774951 containerd[1648]: time="2026-01-28T04:10:49.774924881Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 04:10:49.775171 containerd[1648]: time="2026-01-28T04:10:49.775147055Z" level=info msg="metadata content store policy set" policy=shared Jan 28 04:10:49.797575 containerd[1648]: time="2026-01-28T04:10:49.797486443Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.797697649Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.797907663Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.797934767Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.797967840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.797993025Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.798013879Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.798032641Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.798052792Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 04:10:49.798111 containerd[1648]: time="2026-01-28T04:10:49.798073694Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.798826784Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.798863599Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.798883638Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.798905550Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800645791Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 04:10:49.801117 
containerd[1648]: time="2026-01-28T04:10:49.800695991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800722655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800742414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800765290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800784403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800807812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800827433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800851762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800873626Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 04:10:49.801117 containerd[1648]: time="2026-01-28T04:10:49.800892166Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 04:10:49.801685 containerd[1648]: time="2026-01-28T04:10:49.800946319Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 04:10:49.801685 containerd[1648]: time="2026-01-28T04:10:49.801024052Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 04:10:49.801685 containerd[1648]: time="2026-01-28T04:10:49.801046994Z" level=info msg="Start snapshots syncer" Jan 28 04:10:49.802437 containerd[1648]: time="2026-01-28T04:10:49.801085744Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 04:10:49.803294 containerd[1648]: time="2026-01-28T04:10:49.802920466Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805442957Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805569683Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805756492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805790980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805810740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805829866Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805860394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805881340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805900385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805932484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 
04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.805952865Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.806008431Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.806035551Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 04:10:49.807734 containerd[1648]: time="2026-01-28T04:10:49.806052366Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 04:10:49.808403 containerd[1648]: time="2026-01-28T04:10:49.806068415Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.806084548Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.808793473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.808873496Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.808915482Z" level=info msg="runtime interface created" Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.808927556Z" level=info msg="created NRI interface" Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.808941875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.808974745Z" level=info msg="Connect containerd service" Jan 28 04:10:49.810457 containerd[1648]: time="2026-01-28T04:10:49.809005974Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 04:10:49.817106 containerd[1648]: time="2026-01-28T04:10:49.816530910Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 04:10:49.914247 sshd_keygen[1634]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 04:10:50.039227 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 04:10:50.054987 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 04:10:50.059436 systemd[1]: Started sshd@0-10.230.66.102:22-4.153.228.146:50048.service - OpenSSH per-connection server daemon (4.153.228.146:50048). Jan 28 04:10:50.128182 containerd[1648]: time="2026-01-28T04:10:50.127012834Z" level=info msg="Start subscribing containerd event" Jan 28 04:10:50.131588 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 04:10:50.132467 systemd[1]: Finished issuegen.service - Generate /run/issue. 
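The config= blob the cri plugin dumps a few entries above is plain JSON, so individual knobs are easy to pull out for inspection. A small sketch, keeping only a handful of fields from that dump (for example SystemdCgroup and cdiSpecDirs):

    import json

    # Abridged from the "starting cri plugin" config dump above.
    dump = ('{"containerd":{"defaultRuntimeName":"runc","runtimes":{"runc":'
            '{"runtimeType":"io.containerd.runc.v2","options":{"SystemdCgroup":true}}}},'
            '"enableCDI":true,"cdiSpecDirs":["/etc/cdi","/var/run/cdi"]}')
    cfg = json.loads(dump)
    print(cfg["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])  # True
    print(cfg["cdiSpecDirs"])                                                 # ['/etc/cdi', '/var/run/cdi']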
Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133362523Z" level=info msg="Start recovering state" Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133570774Z" level=info msg="Start event monitor" Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133596014Z" level=info msg="Start cni network conf syncer for default" Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133612353Z" level=info msg="Start streaming server" Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133634105Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133648927Z" level=info msg="runtime interface starting up..." Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133663149Z" level=info msg="starting plugins..." Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.133721052Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 04:10:50.134140 containerd[1648]: time="2026-01-28T04:10:50.127851252Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 04:10:50.142434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 04:10:50.146982 containerd[1648]: time="2026-01-28T04:10:50.146087256Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 04:10:50.161145 containerd[1648]: time="2026-01-28T04:10:50.160604910Z" level=info msg="containerd successfully booted in 0.515645s" Jan 28 04:10:50.160911 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 04:10:50.231716 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 04:10:50.239570 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 04:10:50.244525 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 04:10:50.246666 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 04:10:50.552534 tar[1624]: linux-amd64/README.md Jan 28 04:10:50.588225 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 04:10:50.634139 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:50.793199 sshd[1741]: Accepted publickey for core from 4.153.228.146 port 50048 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:10:50.797775 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:10:50.824961 systemd-logind[1617]: New session 1 of user core. Jan 28 04:10:50.828838 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 04:10:50.843366 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 04:10:50.898565 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 04:10:50.904611 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 04:10:50.930058 (systemd)[1759]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:10:50.949680 systemd-logind[1617]: New session 2 of user core. Jan 28 04:10:51.135596 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 28 04:10:51.155300 systemd[1759]: Queued start job for default target default.target. Jan 28 04:10:51.178212 systemd[1759]: Created slice app.slice - User Application Slice. 
Jan 28 04:10:51.178498 systemd[1759]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 04:10:51.178530 systemd[1759]: Reached target paths.target - Paths. Jan 28 04:10:51.178636 systemd[1759]: Reached target timers.target - Timers. Jan 28 04:10:51.181601 systemd[1759]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 04:10:51.185190 systemd[1759]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 04:10:51.218456 systemd[1759]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 04:10:51.228572 systemd[1759]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 04:10:51.230988 systemd[1759]: Reached target sockets.target - Sockets. Jan 28 04:10:51.231228 systemd[1759]: Reached target basic.target - Basic System. Jan 28 04:10:51.231643 systemd[1759]: Reached target default.target - Main User Target. Jan 28 04:10:51.232029 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 04:10:51.234218 systemd[1759]: Startup finished in 272ms. Jan 28 04:10:51.241944 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 04:10:51.478986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:10:51.490696 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 04:10:51.544250 systemd[1]: Started sshd@1-10.230.66.102:22-4.153.228.146:50052.service - OpenSSH per-connection server daemon (4.153.228.146:50052). Jan 28 04:10:51.636128 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:52.073181 sshd[1779]: Accepted publickey for core from 4.153.228.146 port 50052 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:10:52.076067 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:10:52.091184 systemd-logind[1617]: New session 3 of user core. Jan 28 04:10:52.098535 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 04:10:52.224469 kubelet[1776]: E0128 04:10:52.224386 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 04:10:52.228017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 04:10:52.228355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 04:10:52.229555 systemd[1]: kubelet.service: Consumed 1.592s CPU time, 264.5M memory peak. Jan 28 04:10:52.356688 sshd[1788]: Connection closed by 4.153.228.146 port 50052 Jan 28 04:10:52.357554 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 28 04:10:52.364974 systemd[1]: sshd@1-10.230.66.102:22-4.153.228.146:50052.service: Deactivated successfully. Jan 28 04:10:52.367974 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 04:10:52.369546 systemd-logind[1617]: Session 3 logged out. Waiting for processes to exit. Jan 28 04:10:52.371909 systemd-logind[1617]: Removed session 3. Jan 28 04:10:52.455469 systemd[1]: Started sshd@2-10.230.66.102:22-4.153.228.146:50066.service - OpenSSH per-connection server daemon (4.153.228.146:50066). 
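The kubelet failure above is expected at this stage: the unit exits with status 1 because /var/lib/kubelet/config.yaml has not been provisioned yet; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join, after which the scheduled restarts succeed. A trivial check of the same condition (path taken from the error message, the kubeadm detail is an assumption about how this node is provisioned):

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")      # path from the kubelet error above
    if not cfg.exists():
        print(f"{cfg} missing -- kubelet.service will keep failing until it is provisioned")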
Jan 28 04:10:52.667131 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:52.973142 sshd[1796]: Accepted publickey for core from 4.153.228.146 port 50066 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:10:52.974621 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:10:52.981800 systemd-logind[1617]: New session 4 of user core. Jan 28 04:10:52.989530 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 04:10:53.244839 sshd[1801]: Connection closed by 4.153.228.146 port 50066 Jan 28 04:10:53.245662 sshd-session[1796]: pam_unix(sshd:session): session closed for user core Jan 28 04:10:53.251564 systemd[1]: sshd@2-10.230.66.102:22-4.153.228.146:50066.service: Deactivated successfully. Jan 28 04:10:53.254527 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 04:10:53.256560 systemd-logind[1617]: Session 4 logged out. Waiting for processes to exit. Jan 28 04:10:53.258964 systemd-logind[1617]: Removed session 4. Jan 28 04:10:55.662621 login[1750]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:10:55.675890 systemd-logind[1617]: New session 5 of user core. Jan 28 04:10:55.679122 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:55.682509 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 04:10:55.695774 login[1749]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:10:55.706895 coreos-metadata[1605]: Jan 28 04:10:55.706 WARN failed to locate config-drive, using the metadata service API instead Jan 28 04:10:55.710707 systemd-logind[1617]: New session 6 of user core. Jan 28 04:10:55.719468 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 04:10:55.753237 coreos-metadata[1605]: Jan 28 04:10:55.752 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 28 04:10:55.760369 coreos-metadata[1605]: Jan 28 04:10:55.760 INFO Fetch failed with 404: resource not found Jan 28 04:10:55.760567 coreos-metadata[1605]: Jan 28 04:10:55.760 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 04:10:55.762165 coreos-metadata[1605]: Jan 28 04:10:55.761 INFO Fetch successful Jan 28 04:10:55.762165 coreos-metadata[1605]: Jan 28 04:10:55.762 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 28 04:10:55.773089 coreos-metadata[1605]: Jan 28 04:10:55.773 INFO Fetch successful Jan 28 04:10:55.773428 coreos-metadata[1605]: Jan 28 04:10:55.773 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 28 04:10:55.789068 coreos-metadata[1605]: Jan 28 04:10:55.789 INFO Fetch successful Jan 28 04:10:55.789443 coreos-metadata[1605]: Jan 28 04:10:55.789 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 28 04:10:55.809505 coreos-metadata[1605]: Jan 28 04:10:55.809 INFO Fetch successful Jan 28 04:10:55.809815 coreos-metadata[1605]: Jan 28 04:10:55.809 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 28 04:10:55.829297 coreos-metadata[1605]: Jan 28 04:10:55.829 INFO Fetch successful Jan 28 04:10:55.869455 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 04:10:55.872148 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
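The metadata agent above first looks for a config-drive and, failing that, falls back to the EC2-style HTTP API at 169.254.169.254, fetching each key it needs. The same lookups can be reproduced by hand; a minimal sketch, only meaningful when run on the instance itself, with the endpoints copied from the log:

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/"
    for key in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
        # Link-local metadata address: reachable only from inside the instance.
        with urllib.request.urlopen(BASE + key, timeout=2) as resp:
            print(key, "=", resp.read().decode().strip())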
Jan 28 04:10:56.698196 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 28 04:10:56.713751 coreos-metadata[1702]: Jan 28 04:10:56.713 WARN failed to locate config-drive, using the metadata service API instead Jan 28 04:10:56.736772 coreos-metadata[1702]: Jan 28 04:10:56.736 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 28 04:10:56.779500 coreos-metadata[1702]: Jan 28 04:10:56.779 INFO Fetch successful Jan 28 04:10:56.779701 coreos-metadata[1702]: Jan 28 04:10:56.779 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 28 04:10:56.811354 coreos-metadata[1702]: Jan 28 04:10:56.811 INFO Fetch successful Jan 28 04:10:56.814015 unknown[1702]: wrote ssh authorized keys file for user: core Jan 28 04:10:56.851595 update-ssh-keys[1842]: Updated "/home/core/.ssh/authorized_keys" Jan 28 04:10:56.854008 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 04:10:56.857752 systemd[1]: Finished sshkeys.service. Jan 28 04:10:56.859600 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 04:10:56.861247 systemd[1]: Startup finished in 3.827s (kernel) + 13.747s (initrd) + 13.114s (userspace) = 30.689s. Jan 28 04:11:02.235059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 04:11:02.238267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:11:02.621519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:02.633542 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 04:11:02.724877 kubelet[1854]: E0128 04:11:02.724797 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 04:11:02.729495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 04:11:02.729749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 04:11:02.730727 systemd[1]: kubelet.service: Consumed 422ms CPU time, 108.4M memory peak. Jan 28 04:11:03.354482 systemd[1]: Started sshd@3-10.230.66.102:22-4.153.228.146:35918.service - OpenSSH per-connection server daemon (4.153.228.146:35918). Jan 28 04:11:03.910419 sshd[1862]: Accepted publickey for core from 4.153.228.146 port 35918 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:11:03.912758 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:11:03.921352 systemd-logind[1617]: New session 7 of user core. Jan 28 04:11:03.934509 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 04:11:04.195849 sshd[1866]: Connection closed by 4.153.228.146 port 35918 Jan 28 04:11:04.197257 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Jan 28 04:11:04.205351 systemd[1]: sshd@3-10.230.66.102:22-4.153.228.146:35918.service: Deactivated successfully. Jan 28 04:11:04.208901 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 04:11:04.210718 systemd-logind[1617]: Session 7 logged out. Waiting for processes to exit. Jan 28 04:11:04.213624 systemd-logind[1617]: Removed session 7. 
Jan 28 04:11:04.297149 systemd[1]: Started sshd@4-10.230.66.102:22-4.153.228.146:48150.service - OpenSSH per-connection server daemon (4.153.228.146:48150). Jan 28 04:11:04.810148 sshd[1872]: Accepted publickey for core from 4.153.228.146 port 48150 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:11:04.811491 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:11:04.819311 systemd-logind[1617]: New session 8 of user core. Jan 28 04:11:04.827355 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 04:11:05.077911 sshd[1876]: Connection closed by 4.153.228.146 port 48150 Jan 28 04:11:05.079156 sshd-session[1872]: pam_unix(sshd:session): session closed for user core Jan 28 04:11:05.084849 systemd[1]: sshd@4-10.230.66.102:22-4.153.228.146:48150.service: Deactivated successfully. Jan 28 04:11:05.087715 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 04:11:05.090750 systemd-logind[1617]: Session 8 logged out. Waiting for processes to exit. Jan 28 04:11:05.092309 systemd-logind[1617]: Removed session 8. Jan 28 04:11:05.200486 systemd[1]: Started sshd@5-10.230.66.102:22-4.153.228.146:48166.service - OpenSSH per-connection server daemon (4.153.228.146:48166). Jan 28 04:11:05.735210 sshd[1882]: Accepted publickey for core from 4.153.228.146 port 48166 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:11:05.738193 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:11:05.749156 systemd-logind[1617]: New session 9 of user core. Jan 28 04:11:05.755391 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 04:11:06.024308 sshd[1886]: Connection closed by 4.153.228.146 port 48166 Jan 28 04:11:06.025218 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Jan 28 04:11:06.032889 systemd[1]: sshd@5-10.230.66.102:22-4.153.228.146:48166.service: Deactivated successfully. Jan 28 04:11:06.035823 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 04:11:06.037718 systemd-logind[1617]: Session 9 logged out. Waiting for processes to exit. Jan 28 04:11:06.039540 systemd-logind[1617]: Removed session 9. Jan 28 04:11:06.125845 systemd[1]: Started sshd@6-10.230.66.102:22-4.153.228.146:48170.service - OpenSSH per-connection server daemon (4.153.228.146:48170). Jan 28 04:11:06.641551 sshd[1892]: Accepted publickey for core from 4.153.228.146 port 48170 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:11:06.644194 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:11:06.652214 systemd-logind[1617]: New session 10 of user core. Jan 28 04:11:06.661382 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 04:11:06.848661 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 04:11:06.849287 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 04:11:06.863657 sudo[1897]: pam_unix(sudo:session): session closed for user root Jan 28 04:11:06.953029 sshd[1896]: Connection closed by 4.153.228.146 port 48170 Jan 28 04:11:06.954501 sshd-session[1892]: pam_unix(sshd:session): session closed for user core Jan 28 04:11:06.962901 systemd[1]: sshd@6-10.230.66.102:22-4.153.228.146:48170.service: Deactivated successfully. Jan 28 04:11:06.965917 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 28 04:11:06.968422 systemd-logind[1617]: Session 10 logged out. Waiting for processes to exit. Jan 28 04:11:06.970601 systemd-logind[1617]: Removed session 10. Jan 28 04:11:07.060542 systemd[1]: Started sshd@7-10.230.66.102:22-4.153.228.146:48186.service - OpenSSH per-connection server daemon (4.153.228.146:48186). Jan 28 04:11:07.583859 sshd[1904]: Accepted publickey for core from 4.153.228.146 port 48186 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:11:07.586179 sshd-session[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:11:07.596364 systemd-logind[1617]: New session 11 of user core. Jan 28 04:11:07.606452 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 04:11:07.774738 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 04:11:07.775317 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 04:11:07.779892 sudo[1910]: pam_unix(sudo:session): session closed for user root Jan 28 04:11:07.792227 sudo[1909]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 04:11:07.792734 sudo[1909]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 04:11:07.804946 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 04:11:07.863000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 04:11:07.865534 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 28 04:11:07.865633 kernel: audit: type=1305 audit(1769573467.863:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 04:11:07.869114 augenrules[1934]: No rules Jan 28 04:11:07.863000 audit[1934]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa63d0f70 a2=420 a3=0 items=0 ppid=1915 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:07.871635 kernel: audit: type=1300 audit(1769573467.863:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa63d0f70 a2=420 a3=0 items=0 ppid=1915 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:07.876131 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 04:11:07.876878 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 04:11:07.863000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 04:11:07.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:11:07.883545 sudo[1909]: pam_unix(sudo:session): session closed for user root Jan 28 04:11:07.884678 kernel: audit: type=1327 audit(1769573467.863:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 04:11:07.884783 kernel: audit: type=1130 audit(1769573467.877:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.888777 kernel: audit: type=1131 audit(1769573467.877:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.883000 audit[1909]: USER_END pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.892608 kernel: audit: type=1106 audit(1769573467.883:227): pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.883000 audit[1909]: CRED_DISP pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.896634 kernel: audit: type=1104 audit(1769573467.883:228): pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 04:11:07.975866 sshd[1908]: Connection closed by 4.153.228.146 port 48186 Jan 28 04:11:07.976619 sshd-session[1904]: pam_unix(sshd:session): session closed for user core Jan 28 04:11:07.979000 audit[1904]: USER_END pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:07.991520 kernel: audit: type=1106 audit(1769573467.979:229): pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:07.991609 kernel: audit: type=1104 audit(1769573467.980:230): pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:07.980000 audit[1904]: CRED_DISP pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:07.989310 systemd-logind[1617]: Session 11 logged out. Waiting for processes to exit. Jan 28 04:11:07.990872 systemd[1]: sshd@7-10.230.66.102:22-4.153.228.146:48186.service: Deactivated successfully. Jan 28 04:11:07.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.66.102:22-4.153.228.146:48186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.997111 kernel: audit: type=1131 audit(1769573467.992:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.66.102:22-4.153.228.146:48186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:07.995838 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 04:11:08.000735 systemd-logind[1617]: Removed session 11. Jan 28 04:11:08.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.66.102:22-4.153.228.146:48200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:08.091444 systemd[1]: Started sshd@8-10.230.66.102:22-4.153.228.146:48200.service - OpenSSH per-connection server daemon (4.153.228.146:48200). 
Jan 28 04:11:08.607000 audit[1943]: USER_ACCT pid=1943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:08.608009 sshd[1943]: Accepted publickey for core from 4.153.228.146 port 48200 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:11:08.609000 audit[1943]: CRED_ACQ pid=1943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:08.609000 audit[1943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc897b8910 a2=3 a3=0 items=0 ppid=1 pid=1943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:08.609000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:11:08.610533 sshd-session[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:11:08.617951 systemd-logind[1617]: New session 12 of user core. Jan 28 04:11:08.638452 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 04:11:08.643000 audit[1943]: USER_START pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:08.646000 audit[1947]: CRED_ACQ pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:11:08.795000 audit[1948]: USER_ACCT pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:11:08.795000 audit[1948]: CRED_REFR pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:11:08.795000 audit[1948]: USER_START pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:11:08.795567 sudo[1948]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 04:11:08.796063 sudo[1948]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 04:11:09.562883 systemd[1]: Starting docker.service - Docker Application Container Engine... 
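The audit PROCTITLE fields in the records above, and in the iptables records that follow as docker creates its chains, are the audited process's argv hex-encoded with NUL separators between arguments. Decoding one is a couple of lines of Python; the sample string below is copied from the auditctl record earlier in the log:

    # PROCTITLE value from the auditctl record above (hex, NUL-separated argv).
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))   # /sbin/auditctl -R /etc/audit/audit.rules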
Jan 28 04:11:09.593753 (dockerd)[1966]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 04:11:10.154656 dockerd[1966]: time="2026-01-28T04:11:10.154003453Z" level=info msg="Starting up" Jan 28 04:11:10.156423 dockerd[1966]: time="2026-01-28T04:11:10.156383961Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 04:11:10.189330 dockerd[1966]: time="2026-01-28T04:11:10.189155572Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 04:11:10.225624 systemd[1]: var-lib-docker-metacopy\x2dcheck3325657289-merged.mount: Deactivated successfully. Jan 28 04:11:10.282866 dockerd[1966]: time="2026-01-28T04:11:10.282793728Z" level=info msg="Loading containers: start." Jan 28 04:11:10.300202 kernel: Initializing XFRM netlink socket Jan 28 04:11:10.398000 audit[2017]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.398000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffefcbdd1e0 a2=0 a3=0 items=0 ppid=1966 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 04:11:10.403000 audit[2019]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.403000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc00219740 a2=0 a3=0 items=0 ppid=1966 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.403000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 04:11:10.406000 audit[2021]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.406000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca06a1230 a2=0 a3=0 items=0 ppid=1966 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 04:11:10.409000 audit[2023]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.409000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1ab670f0 a2=0 a3=0 items=0 ppid=1966 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.409000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 04:11:10.412000 audit[2025]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.412000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa1e232a0 a2=0 a3=0 items=0 ppid=1966 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 04:11:10.415000 audit[2027]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.415000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe939816d0 a2=0 a3=0 items=0 ppid=1966 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.415000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 04:11:10.418000 audit[2029]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.418000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff53225c30 a2=0 a3=0 items=0 ppid=1966 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 04:11:10.425000 audit[2031]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.425000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff9f527470 a2=0 a3=0 items=0 ppid=1966 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 04:11:10.467000 audit[2034]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.467000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe6a74aa80 a2=0 a3=0 items=0 ppid=1966 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.467000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 04:11:10.471000 audit[2036]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.471000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee88b0cd0 a2=0 a3=0 items=0 ppid=1966 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.471000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 04:11:10.474000 audit[2038]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.474000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff0e1ba6a0 a2=0 a3=0 items=0 ppid=1966 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 04:11:10.478000 audit[2040]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.478000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe5087f620 a2=0 a3=0 items=0 ppid=1966 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 04:11:10.482000 audit[2042]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.482000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdf68beb30 a2=0 a3=0 items=0 ppid=1966 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.482000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 04:11:10.536000 audit[2072]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.536000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe4c14e260 a2=0 a3=0 items=0 ppid=1966 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.536000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 04:11:10.539000 audit[2074]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.539000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe9cd63c00 a2=0 a3=0 items=0 ppid=1966 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.539000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 04:11:10.542000 audit[2076]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.542000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6318b900 a2=0 a3=0 items=0 ppid=1966 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.542000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 04:11:10.545000 audit[2078]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.545000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6051c330 a2=0 a3=0 items=0 ppid=1966 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.545000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 04:11:10.548000 audit[2080]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.548000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef1706640 a2=0 a3=0 items=0 ppid=1966 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 04:11:10.552000 audit[2082]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.552000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffb6415c90 a2=0 a3=0 items=0 ppid=1966 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.552000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 04:11:10.555000 audit[2084]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2084 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.555000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe48c8e5c0 a2=0 a3=0 items=0 ppid=1966 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 04:11:10.558000 audit[2086]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.558000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffec2d52af0 a2=0 a3=0 items=0 ppid=1966 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 04:11:10.562000 audit[2088]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.562000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe1d050340 a2=0 a3=0 items=0 ppid=1966 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.562000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 04:11:10.565000 audit[2090]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.565000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe2948dad0 a2=0 a3=0 items=0 ppid=1966 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 04:11:10.568000 audit[2092]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.568000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe618246f0 a2=0 a3=0 items=0 ppid=1966 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 04:11:10.571000 audit[2094]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2094 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 04:11:10.571000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff24e48310 a2=0 a3=0 items=0 ppid=1966 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 04:11:10.574000 audit[2096]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.574000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe56498d70 a2=0 a3=0 items=0 ppid=1966 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 04:11:10.583000 audit[2101]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.583000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc735fcb00 a2=0 a3=0 items=0 ppid=1966 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 04:11:10.586000 audit[2103]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.586000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffeb5c09d0 a2=0 a3=0 items=0 ppid=1966 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 04:11:10.589000 audit[2105]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.589000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd06f944d0 a2=0 a3=0 items=0 ppid=1966 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.589000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 04:11:10.592000 audit[2107]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.592000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe185dabb0 a2=0 a3=0 items=0 ppid=1966 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.592000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 04:11:10.595000 audit[2109]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.595000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffbfcdb3d0 a2=0 a3=0 items=0 ppid=1966 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 04:11:10.598000 audit[2111]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:10.598000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff6fdb60f0 a2=0 a3=0 items=0 ppid=1966 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 04:11:10.630562 systemd-timesyncd[1518]: Network configuration changed, trying to establish connection. Jan 28 04:11:10.646000 audit[2115]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.646000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd4514a3d0 a2=0 a3=0 items=0 ppid=1966 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 04:11:10.651000 audit[2117]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.651000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd9c44b390 a2=0 a3=0 items=0 ppid=1966 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.651000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 04:11:10.664000 audit[2125]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.664000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff85bb8ba0 a2=0 a3=0 items=0 ppid=1966 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 04:11:10.678000 audit[2131]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.678000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffecfd87790 a2=0 a3=0 items=0 ppid=1966 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 04:11:10.682000 audit[2133]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.682000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffdca06bf40 a2=0 a3=0 items=0 ppid=1966 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 04:11:10.685000 audit[2135]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.685000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc9dcf0fe0 a2=0 a3=0 items=0 ppid=1966 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 04:11:10.689000 audit[2137]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.689000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffce59e640 a2=0 a3=0 items=0 ppid=1966 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 04:11:10.692000 audit[2139]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:10.692000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffc394250a0 a2=0 a3=0 items=0 ppid=1966 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:10.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 04:11:10.693568 systemd-networkd[1551]: docker0: Link UP Jan 28 04:11:10.697704 dockerd[1966]: time="2026-01-28T04:11:10.697622172Z" level=info msg="Loading containers: done." Jan 28 04:11:10.732078 dockerd[1966]: time="2026-01-28T04:11:10.732007741Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 04:11:10.732333 dockerd[1966]: time="2026-01-28T04:11:10.732148830Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 04:11:10.732333 dockerd[1966]: time="2026-01-28T04:11:10.732309413Z" level=info msg="Initializing buildkit" Jan 28 04:11:10.760449 dockerd[1966]: time="2026-01-28T04:11:10.760390988Z" level=info msg="Completed buildkit initialization" Jan 28 04:11:10.770350 dockerd[1966]: time="2026-01-28T04:11:10.770252184Z" level=info msg="Daemon has completed initialization" Jan 28 04:11:10.770520 dockerd[1966]: time="2026-01-28T04:11:10.770368003Z" level=info msg="API listen on /run/docker.sock" Jan 28 04:11:10.771227 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 04:11:10.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:12.041144 systemd-resolved[1307]: Clock change detected. Flushing caches. Jan 28 04:11:12.042129 systemd-timesyncd[1518]: Contacted time server [2a00:1098:0:86:1000:67:0:1]:123 (2.flatcar.pool.ntp.org). Jan 28 04:11:12.042232 systemd-timesyncd[1518]: Initial clock synchronization to Wed 2026-01-28 04:11:12.040835 UTC. Jan 28 04:11:12.138179 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2696894169-merged.mount: Deactivated successfully. Jan 28 04:11:13.025090 containerd[1648]: time="2026-01-28T04:11:13.023611947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 04:11:13.662475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 04:11:13.667700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:11:14.024890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:14.031283 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 28 04:11:14.031508 kernel: audit: type=1130 audit(1769573474.024:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:14.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:11:14.036792 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 04:11:14.149245 kubelet[2188]: E0128 04:11:14.148902 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 04:11:14.154203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 04:11:14.154523 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 04:11:14.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:14.155670 systemd[1]: kubelet.service: Consumed 353ms CPU time, 108.6M memory peak. Jan 28 04:11:14.160351 kernel: audit: type=1131 audit(1769573474.154:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:14.899643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558848266.mount: Deactivated successfully. Jan 28 04:11:18.423033 containerd[1648]: time="2026-01-28T04:11:18.422826675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:18.425591 containerd[1648]: time="2026-01-28T04:11:18.425520585Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27945977" Jan 28 04:11:18.426852 containerd[1648]: time="2026-01-28T04:11:18.426734213Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:18.435071 containerd[1648]: time="2026-01-28T04:11:18.434162744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:18.436243 containerd[1648]: time="2026-01-28T04:11:18.435699791Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 5.411851229s" Jan 28 04:11:18.436243 containerd[1648]: time="2026-01-28T04:11:18.435803517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 04:11:18.438090 containerd[1648]: time="2026-01-28T04:11:18.438033415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 04:11:20.693396 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 28 04:11:20.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:20.705279 kernel: audit: type=1131 audit(1769573480.692:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:20.722000 audit: BPF prog-id=61 op=UNLOAD Jan 28 04:11:20.725290 kernel: audit: type=1334 audit(1769573480.722:285): prog-id=61 op=UNLOAD Jan 28 04:11:21.258034 containerd[1648]: time="2026-01-28T04:11:21.257965135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:21.259680 containerd[1648]: time="2026-01-28T04:11:21.259396365Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 28 04:11:21.260480 containerd[1648]: time="2026-01-28T04:11:21.260439716Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:21.264059 containerd[1648]: time="2026-01-28T04:11:21.264021846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:21.265684 containerd[1648]: time="2026-01-28T04:11:21.265629870Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.827546568s" Jan 28 04:11:21.265809 containerd[1648]: time="2026-01-28T04:11:21.265687478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 04:11:21.267093 containerd[1648]: time="2026-01-28T04:11:21.266873476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 04:11:24.162516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 04:11:24.167461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
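Each containerd pull record pairs a bytes-read counter with an elapsed time, so an effective transfer rate can be read straight out of the log. A worked example using the kube-controller-manager figures above (24,985,199 bytes read in 2.827546568 s), assuming the bytes-read counter approximates what was actually fetched during this pull:

    bytes_read = 24_985_199      # "bytes read=24985199"
    elapsed_s  = 2.827546568     # "... in 2.827546568s"
    print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")   # ~8.8 MB/s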
Jan 28 04:11:24.810328 containerd[1648]: time="2026-01-28T04:11:24.810118552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:24.813970 containerd[1648]: time="2026-01-28T04:11:24.812523816Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 28 04:11:24.813970 containerd[1648]: time="2026-01-28T04:11:24.813316995Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:24.817918 containerd[1648]: time="2026-01-28T04:11:24.817846279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:24.819427 containerd[1648]: time="2026-01-28T04:11:24.819388336Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.552474334s" Jan 28 04:11:24.819513 containerd[1648]: time="2026-01-28T04:11:24.819430865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 04:11:24.820204 containerd[1648]: time="2026-01-28T04:11:24.820131265Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 04:11:24.827844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:24.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:24.848289 kernel: audit: type=1130 audit(1769573484.826:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:24.857923 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 04:11:24.949982 kubelet[2270]: E0128 04:11:24.949882 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 04:11:24.953099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 04:11:24.953377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 04:11:24.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:24.954366 systemd[1]: kubelet.service: Consumed 392ms CPU time, 107.8M memory peak. 
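This is the same failure as at 04:11:14: /var/lib/kubelet/config.yaml does not exist yet, so the kubelet exits immediately and systemd keeps scheduling restarts (the counter climbs 2, 3, 4 as the log goes on). On a kubeadm-bootstrapped node that file is normally written by kubeadm init or kubeadm join; the sketch below only illustrates the kind of minimal KubeletConfiguration involved, with the two field values inferred from later lines of this log (the systemd cgroup driver and the /etc/kubernetes/manifests static pod path), not the file the tooling actually generated here:

    # Illustrative sketch only; a kubeadm-generated /var/lib/kubelet/config.yaml
    # is considerably larger than this.
    MINIMAL_KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    """

    print(MINIMAL_KUBELET_CONFIG)

Once the file exists, as it evidently does by the 04:11:46 start further down, the restart loop stops and the kubelet proceeds with its normal startup.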
Jan 28 04:11:24.958288 kernel: audit: type=1131 audit(1769573484.953:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:27.806807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922470280.mount: Deactivated successfully. Jan 28 04:11:28.966848 containerd[1648]: time="2026-01-28T04:11:28.966715821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:28.968841 containerd[1648]: time="2026-01-28T04:11:28.968761011Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 28 04:11:28.969926 containerd[1648]: time="2026-01-28T04:11:28.969870068Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:28.978299 containerd[1648]: time="2026-01-28T04:11:28.976892862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:28.978299 containerd[1648]: time="2026-01-28T04:11:28.976907688Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 4.156732881s" Jan 28 04:11:28.978299 containerd[1648]: time="2026-01-28T04:11:28.977548650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 04:11:28.979785 containerd[1648]: time="2026-01-28T04:11:28.979739774Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 04:11:29.905335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772601603.mount: Deactivated successfully. 
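The transient mount units appearing here (var-lib-containerd-tmpmounts-containerd\x2dmount1772601603.mount and similar) use systemd's unit-name escaping: "-" stands for a "/" path separator and literal characters are hex-escaped as \xNN, so "\x2d" is a real "-" in the path. A small unescaper, assuming Python 3.9+:

    import re

    def unit_to_path(unit: str) -> str:
        # Turn "-" separators back into "/" first, then resolve \xNN escapes.
        body = unit.removesuffix(".mount").replace("-", "/")
        return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                            lambda m: chr(int(m.group(1), 16)), body)

    print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1772601603.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount1772601603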
Jan 28 04:11:31.482317 containerd[1648]: time="2026-01-28T04:11:31.482233452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:31.483782 containerd[1648]: time="2026-01-28T04:11:31.483713570Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 28 04:11:31.484597 containerd[1648]: time="2026-01-28T04:11:31.484507074Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:31.489295 containerd[1648]: time="2026-01-28T04:11:31.488343744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:31.491017 containerd[1648]: time="2026-01-28T04:11:31.490477710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.51036158s" Jan 28 04:11:31.491017 containerd[1648]: time="2026-01-28T04:11:31.490522892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 04:11:31.491437 containerd[1648]: time="2026-01-28T04:11:31.491409261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 04:11:32.621631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228572581.mount: Deactivated successfully. 
Jan 28 04:11:32.663766 containerd[1648]: time="2026-01-28T04:11:32.663655540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 04:11:32.666492 containerd[1648]: time="2026-01-28T04:11:32.666439751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 04:11:32.667330 containerd[1648]: time="2026-01-28T04:11:32.667252742Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 04:11:32.671720 containerd[1648]: time="2026-01-28T04:11:32.671619535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 04:11:32.673292 containerd[1648]: time="2026-01-28T04:11:32.672801940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.181232228s" Jan 28 04:11:32.673292 containerd[1648]: time="2026-01-28T04:11:32.672875710Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 04:11:32.674251 containerd[1648]: time="2026-01-28T04:11:32.673927626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 04:11:34.376933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081059083.mount: Deactivated successfully. Jan 28 04:11:35.113977 update_engine[1618]: I20260128 04:11:35.113672 1618 update_attempter.cc:509] Updating boot flags... Jan 28 04:11:35.133482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 04:11:35.141609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:11:35.606642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:35.628935 kernel: audit: type=1130 audit(1769573495.605:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:35.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:11:35.634845 (kubelet)[2414]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 04:11:35.756929 kubelet[2414]: E0128 04:11:35.756824 2414 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 04:11:35.760463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 04:11:35.760816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 04:11:35.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:35.761581 systemd[1]: kubelet.service: Consumed 320ms CPU time, 110.7M memory peak. Jan 28 04:11:35.766294 kernel: audit: type=1131 audit(1769573495.760:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:41.673229 containerd[1648]: time="2026-01-28T04:11:41.672895336Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55729261" Jan 28 04:11:41.674077 containerd[1648]: time="2026-01-28T04:11:41.673968998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:41.678283 containerd[1648]: time="2026-01-28T04:11:41.677616755Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:41.680422 containerd[1648]: time="2026-01-28T04:11:41.680389371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:11:41.681897 containerd[1648]: time="2026-01-28T04:11:41.681861732Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 9.007894056s" Jan 28 04:11:41.682022 containerd[1648]: time="2026-01-28T04:11:41.681995496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 04:11:45.157329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:45.157765 systemd[1]: kubelet.service: Consumed 320ms CPU time, 110.7M memory peak. Jan 28 04:11:45.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:11:45.169844 kernel: audit: type=1130 audit(1769573505.156:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:45.169972 kernel: audit: type=1131 audit(1769573505.156:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:45.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:45.166653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:11:45.216815 systemd[1]: Reload requested from client PID 2453 ('systemctl') (unit session-12.scope)... Jan 28 04:11:45.216873 systemd[1]: Reloading... Jan 28 04:11:45.397403 zram_generator::config[2497]: No configuration found. Jan 28 04:11:46.018333 systemd[1]: Reloading finished in 800 ms. Jan 28 04:11:46.058000 audit: BPF prog-id=65 op=LOAD Jan 28 04:11:46.066323 kernel: audit: type=1334 audit(1769573506.058:292): prog-id=65 op=LOAD Jan 28 04:11:46.064000 audit: BPF prog-id=51 op=UNLOAD Jan 28 04:11:46.070459 kernel: audit: type=1334 audit(1769573506.064:293): prog-id=51 op=UNLOAD Jan 28 04:11:46.070556 kernel: audit: type=1334 audit(1769573506.064:294): prog-id=66 op=LOAD Jan 28 04:11:46.064000 audit: BPF prog-id=66 op=LOAD Jan 28 04:11:46.072016 kernel: audit: type=1334 audit(1769573506.064:295): prog-id=67 op=LOAD Jan 28 04:11:46.064000 audit: BPF prog-id=67 op=LOAD Jan 28 04:11:46.073464 kernel: audit: type=1334 audit(1769573506.064:296): prog-id=52 op=UNLOAD Jan 28 04:11:46.064000 audit: BPF prog-id=52 op=UNLOAD Jan 28 04:11:46.075149 kernel: audit: type=1334 audit(1769573506.064:297): prog-id=53 op=UNLOAD Jan 28 04:11:46.064000 audit: BPF prog-id=53 op=UNLOAD Jan 28 04:11:46.070000 audit: BPF prog-id=68 op=LOAD Jan 28 04:11:46.078121 kernel: audit: type=1334 audit(1769573506.070:298): prog-id=68 op=LOAD Jan 28 04:11:46.078224 kernel: audit: type=1334 audit(1769573506.070:299): prog-id=47 op=UNLOAD Jan 28 04:11:46.070000 audit: BPF prog-id=47 op=UNLOAD Jan 28 04:11:46.074000 audit: BPF prog-id=69 op=LOAD Jan 28 04:11:46.074000 audit: BPF prog-id=58 op=UNLOAD Jan 28 04:11:46.074000 audit: BPF prog-id=70 op=LOAD Jan 28 04:11:46.074000 audit: BPF prog-id=71 op=LOAD Jan 28 04:11:46.074000 audit: BPF prog-id=59 op=UNLOAD Jan 28 04:11:46.074000 audit: BPF prog-id=60 op=UNLOAD Jan 28 04:11:46.104000 audit: BPF prog-id=72 op=LOAD Jan 28 04:11:46.104000 audit: BPF prog-id=57 op=UNLOAD Jan 28 04:11:46.106000 audit: BPF prog-id=73 op=LOAD Jan 28 04:11:46.106000 audit: BPF prog-id=41 op=UNLOAD Jan 28 04:11:46.106000 audit: BPF prog-id=74 op=LOAD Jan 28 04:11:46.106000 audit: BPF prog-id=75 op=LOAD Jan 28 04:11:46.106000 audit: BPF prog-id=42 op=UNLOAD Jan 28 04:11:46.106000 audit: BPF prog-id=43 op=UNLOAD Jan 28 04:11:46.107000 audit: BPF prog-id=76 op=LOAD Jan 28 04:11:46.107000 audit: BPF prog-id=48 op=UNLOAD Jan 28 04:11:46.107000 audit: BPF prog-id=77 op=LOAD Jan 28 04:11:46.107000 audit: BPF prog-id=78 op=LOAD Jan 28 04:11:46.107000 audit: BPF prog-id=49 op=UNLOAD Jan 28 04:11:46.107000 audit: BPF prog-id=50 op=UNLOAD Jan 28 04:11:46.108000 audit: BPF prog-id=79 op=LOAD Jan 28 04:11:46.108000 audit: BPF prog-id=44 op=UNLOAD Jan 28 
04:11:46.108000 audit: BPF prog-id=80 op=LOAD Jan 28 04:11:46.108000 audit: BPF prog-id=81 op=LOAD Jan 28 04:11:46.108000 audit: BPF prog-id=45 op=UNLOAD Jan 28 04:11:46.108000 audit: BPF prog-id=46 op=UNLOAD Jan 28 04:11:46.110000 audit: BPF prog-id=82 op=LOAD Jan 28 04:11:46.110000 audit: BPF prog-id=64 op=UNLOAD Jan 28 04:11:46.111000 audit: BPF prog-id=83 op=LOAD Jan 28 04:11:46.111000 audit: BPF prog-id=84 op=LOAD Jan 28 04:11:46.111000 audit: BPF prog-id=54 op=UNLOAD Jan 28 04:11:46.111000 audit: BPF prog-id=55 op=UNLOAD Jan 28 04:11:46.112000 audit: BPF prog-id=85 op=LOAD Jan 28 04:11:46.112000 audit: BPF prog-id=56 op=UNLOAD Jan 28 04:11:46.140551 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 04:11:46.141174 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 04:11:46.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 04:11:46.141922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:46.142375 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.6M memory peak. Jan 28 04:11:46.146816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:11:46.343555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:46.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:46.355748 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 04:11:46.423715 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 04:11:46.423715 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 04:11:46.423715 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
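The three deprecation warnings concern flags that now have config-file equivalents: --container-runtime-endpoint and --volume-plugin-dir map to KubeletConfiguration fields, while --pod-infra-container-image is going away entirely because, as the message says, the sandbox image will come from the CRI runtime. The field names below are the standard kubelet.config.k8s.io/v1beta1 ones, listed for reference rather than taken from this log:

    # Deprecated kubelet flags warned about above -> KubeletConfiguration fields.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image: no config field; removed in 1.35 per the
        # warning above, with the sandbox image reported by the CRI runtime.
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:32s} -> {field}")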
Jan 28 04:11:46.424353 kubelet[2569]: I0128 04:11:46.423784 2569 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 04:11:46.934905 kubelet[2569]: I0128 04:11:46.934855 2569 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 04:11:46.934905 kubelet[2569]: I0128 04:11:46.934899 2569 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 04:11:46.935297 kubelet[2569]: I0128 04:11:46.935276 2569 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 04:11:46.979053 kubelet[2569]: E0128 04:11:46.978936 2569 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.66.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:46.982069 kubelet[2569]: I0128 04:11:46.981770 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 04:11:47.001187 kubelet[2569]: I0128 04:11:47.001149 2569 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 04:11:47.011198 kubelet[2569]: I0128 04:11:47.010668 2569 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 04:11:47.012903 kubelet[2569]: I0128 04:11:47.012846 2569 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 04:11:47.014409 kubelet[2569]: I0128 04:11:47.013005 2569 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3avyi.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 04:11:47.017311 kubelet[2569]: I0128 04:11:47.017285 2569 topology_manager.go:138] "Creating topology manager with none policy" Jan 
28 04:11:47.017434 kubelet[2569]: I0128 04:11:47.017416 2569 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 04:11:47.018853 kubelet[2569]: I0128 04:11:47.018831 2569 state_mem.go:36] "Initialized new in-memory state store" Jan 28 04:11:47.024114 kubelet[2569]: I0128 04:11:47.023903 2569 kubelet.go:446] "Attempting to sync node with API server" Jan 28 04:11:47.024114 kubelet[2569]: I0128 04:11:47.023989 2569 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 04:11:47.025318 kubelet[2569]: I0128 04:11:47.025295 2569 kubelet.go:352] "Adding apiserver pod source" Jan 28 04:11:47.025467 kubelet[2569]: I0128 04:11:47.025446 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 04:11:47.029426 kubelet[2569]: W0128 04:11:47.028903 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.66.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3avyi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:47.029426 kubelet[2569]: E0128 04:11:47.028994 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.66.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3avyi.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:47.030175 kubelet[2569]: I0128 04:11:47.030143 2569 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 04:11:47.033254 kubelet[2569]: I0128 04:11:47.033224 2569 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 04:11:47.033517 kubelet[2569]: W0128 04:11:47.033494 2569 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
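The NodeConfig dump above lists the kubelet's HardEvictionThresholds. As a minimal illustrative sketch (values copied from the logged config, not a claim about how the kubelet was invoked), they can be rendered in the conventional --eviction-hard flag syntax:

# Sketch: render the HardEvictionThresholds from the logged NodeConfig in
# --eviction-hard syntax. Values are copied from the log line above.
thresholds = [
    {"Signal": "memory.available",   "Quantity": "100Mi", "Percentage": 0},
    {"Signal": "nodefs.available",   "Quantity": None,    "Percentage": 0.1},
    {"Signal": "nodefs.inodesFree",  "Quantity": None,    "Percentage": 0.05},
    {"Signal": "imagefs.available",  "Quantity": None,    "Percentage": 0.15},
    {"Signal": "imagefs.inodesFree", "Quantity": None,    "Percentage": 0.05},
]

def render(t: dict) -> str:
    # A quantity wins over a percentage; percentages are logged as fractions.
    value = t["Quantity"] if t["Quantity"] else f"{t['Percentage'] * 100:g}%"
    return f"{t['Signal']}<{value}"

print(",".join(render(t) for t in thresholds))
# memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<15%,imagefs.inodesFree<5%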
Jan 28 04:11:47.034722 kubelet[2569]: I0128 04:11:47.034692 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 04:11:47.034880 kubelet[2569]: I0128 04:11:47.034860 2569 server.go:1287] "Started kubelet" Jan 28 04:11:47.035229 kubelet[2569]: W0128 04:11:47.035149 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.66.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:47.035391 kubelet[2569]: E0128 04:11:47.035352 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.66.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:47.058287 kubelet[2569]: E0128 04:11:47.053439 2569 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.102:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.102:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-3avyi.gb1.brightbox.com.188ec9baa2c0c268 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3avyi.gb1.brightbox.com,UID:srv-3avyi.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3avyi.gb1.brightbox.com,},FirstTimestamp:2026-01-28 04:11:47.03482532 +0000 UTC m=+0.673899833,LastTimestamp:2026-01-28 04:11:47.03482532 +0000 UTC m=+0.673899833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3avyi.gb1.brightbox.com,}" Jan 28 04:11:47.058287 kubelet[2569]: I0128 04:11:47.057150 2569 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 04:11:47.060620 kubelet[2569]: I0128 04:11:47.060583 2569 server.go:479] "Adding debug handlers to kubelet server" Jan 28 04:11:47.061694 kubelet[2569]: I0128 04:11:47.061668 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 04:11:47.062310 kubelet[2569]: I0128 04:11:47.062215 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 04:11:47.062603 kubelet[2569]: I0128 04:11:47.062574 2569 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 04:11:47.064389 kubelet[2569]: I0128 04:11:47.064358 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 04:11:47.068000 audit[2581]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.068000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe2bda4260 a2=0 a3=0 items=0 ppid=2569 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.068000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 
04:11:47.071019 kubelet[2569]: I0128 04:11:47.070185 2569 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 04:11:47.074954 kubelet[2569]: E0128 04:11:47.074908 2569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-3avyi.gb1.brightbox.com\" not found" Jan 28 04:11:47.075045 kubelet[2569]: I0128 04:11:47.075012 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 04:11:47.075218 kubelet[2569]: I0128 04:11:47.075109 2569 reconciler.go:26] "Reconciler: start to sync state" Jan 28 04:11:47.075386 kubelet[2569]: E0128 04:11:47.075337 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3avyi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.102:6443: connect: connection refused" interval="200ms" Jan 28 04:11:47.076000 audit[2582]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.076000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5442eb20 a2=0 a3=0 items=0 ppid=2569 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 04:11:47.079134 kubelet[2569]: W0128 04:11:47.079006 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.66.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:47.079134 kubelet[2569]: E0128 04:11:47.079079 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.66.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:47.082393 kubelet[2569]: I0128 04:11:47.082357 2569 factory.go:221] Registration of the systemd container factory successfully Jan 28 04:11:47.082721 kubelet[2569]: E0128 04:11:47.082608 2569 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 04:11:47.082829 kubelet[2569]: I0128 04:11:47.082727 2569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 04:11:47.084000 audit[2584]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.086683 kubelet[2569]: I0128 04:11:47.086660 2569 factory.go:221] Registration of the containerd container factory successfully Jan 28 04:11:47.084000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe48644a0 a2=0 a3=0 items=0 ppid=2569 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 04:11:47.094000 audit[2586]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.094000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd2b0eb270 a2=0 a3=0 items=0 ppid=2569 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 04:11:47.118000 audit[2593]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.118000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff2d370c40 a2=0 a3=0 items=0 ppid=2569 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 04:11:47.121371 kubelet[2569]: I0128 04:11:47.120567 2569 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 28 04:11:47.122650 kubelet[2569]: I0128 04:11:47.122623 2569 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 04:11:47.122650 kubelet[2569]: I0128 04:11:47.122646 2569 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 04:11:47.122777 kubelet[2569]: I0128 04:11:47.122680 2569 state_mem.go:36] "Initialized new in-memory state store" Jan 28 04:11:47.122000 audit[2594]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:47.122000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe952fe170 a2=0 a3=0 items=0 ppid=2569 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 04:11:47.123000 audit[2595]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.123000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8e210bf0 a2=0 a3=0 items=0 ppid=2569 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 04:11:47.126350 kubelet[2569]: I0128 04:11:47.125043 2569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 04:11:47.126350 kubelet[2569]: I0128 04:11:47.125091 2569 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 04:11:47.126350 kubelet[2569]: I0128 04:11:47.125130 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 04:11:47.126350 kubelet[2569]: I0128 04:11:47.125143 2569 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 04:11:47.126350 kubelet[2569]: E0128 04:11:47.125216 2569 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 04:11:47.128285 kubelet[2569]: I0128 04:11:47.128215 2569 policy_none.go:49] "None policy: Start" Jan 28 04:11:47.128285 kubelet[2569]: I0128 04:11:47.128253 2569 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 04:11:47.128417 kubelet[2569]: I0128 04:11:47.128307 2569 state_mem.go:35] "Initializing new in-memory state store" Jan 28 04:11:47.127000 audit[2597]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:47.127000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4d8e07b0 a2=0 a3=0 items=0 ppid=2569 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 04:11:47.129553 kubelet[2569]: W0128 04:11:47.129501 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.66.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:47.129646 kubelet[2569]: E0128 04:11:47.129567 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.66.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:47.130000 audit[2599]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:47.130000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff55c740f0 a2=0 a3=0 items=0 ppid=2569 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 04:11:47.131000 audit[2598]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.131000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1d147f70 a2=0 a3=0 items=0 ppid=2569 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 04:11:47.133000 audit[2601]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2601 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:11:47.133000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd456002c0 a2=0 a3=0 items=0 ppid=2569 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 04:11:47.135000 audit[2602]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:11:47.135000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe44498180 a2=0 a3=0 items=0 ppid=2569 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.135000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 04:11:47.142734 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 04:11:47.155033 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 04:11:47.161285 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 04:11:47.175276 kubelet[2569]: E0128 04:11:47.175204 2569 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-3avyi.gb1.brightbox.com\" not found" Jan 28 04:11:47.177923 kubelet[2569]: I0128 04:11:47.177896 2569 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 04:11:47.178443 kubelet[2569]: I0128 04:11:47.178413 2569 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 04:11:47.178759 kubelet[2569]: I0128 04:11:47.178581 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 04:11:47.180190 kubelet[2569]: I0128 04:11:47.180144 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 04:11:47.183620 kubelet[2569]: E0128 04:11:47.183567 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 04:11:47.183901 kubelet[2569]: E0128 04:11:47.183794 2569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-3avyi.gb1.brightbox.com\" not found" Jan 28 04:11:47.241454 systemd[1]: Created slice kubepods-burstable-pod11053e522131eb66054c9b582af885d5.slice - libcontainer container kubepods-burstable-pod11053e522131eb66054c9b582af885d5.slice. Jan 28 04:11:47.263996 kubelet[2569]: E0128 04:11:47.263865 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.271024 systemd[1]: Created slice kubepods-burstable-poda3de0aba95ea39af80e6d3ceefa973fe.slice - libcontainer container kubepods-burstable-poda3de0aba95ea39af80e6d3ceefa973fe.slice. 
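The kubepods-*.slice units systemd reports creating above form the kubelet's QoS cgroup hierarchy. The sketch below is illustrative only (not kubelet source): it shows how a burstable pod's slice name expands into nested cgroup v2 paths, using the pod UID taken from the log; real pod UIDs containing dashes are escaped by the kubelet, which this sketch ignores.

# Sketch: map a QoS class and pod UID to the systemd slice name seen in the
# log, then expand it into its nested cgroup v2 path (each dash-separated
# prefix is a parent slice).
def pod_slice(qos: str, pod_uid: str) -> str:
    base = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{base}-pod{pod_uid}.slice"

def cgroup_path(slice_name: str) -> str:
    parts = slice_name[: -len(".slice")].split("-")
    nested = ["-".join(parts[: i + 1]) + ".slice" for i in range(len(parts))]
    return "/sys/fs/cgroup/" + "/".join(nested)

name = pod_slice("burstable", "11053e522131eb66054c9b582af885d5")
print(name)
# kubepods-burstable-pod11053e522131eb66054c9b582af885d5.slice
print(cgroup_path(name))
# /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11053e522131eb66054c9b582af885d5.slice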
Jan 28 04:11:47.276599 kubelet[2569]: E0128 04:11:47.276563 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.277140 kubelet[2569]: E0128 04:11:47.276974 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3avyi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.102:6443: connect: connection refused" interval="400ms" Jan 28 04:11:47.279180 systemd[1]: Created slice kubepods-burstable-poddc9e7e5a51a7f27d45b06bb0c9a8a0d6.slice - libcontainer container kubepods-burstable-poddc9e7e5a51a7f27d45b06bb0c9a8a0d6.slice. Jan 28 04:11:47.281554 kubelet[2569]: I0128 04:11:47.281515 2569 kubelet_node_status.go:75] "Attempting to register node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.282657 kubelet[2569]: E0128 04:11:47.282598 2569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.102:6443/api/v1/nodes\": dial tcp 10.230.66.102:6443: connect: connection refused" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.284069 kubelet[2569]: E0128 04:11:47.283831 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.375800 kubelet[2569]: I0128 04:11:47.375730 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-ca-certs\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376063 kubelet[2569]: I0128 04:11:47.376034 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-k8s-certs\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376225 kubelet[2569]: I0128 04:11:47.376198 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376407 kubelet[2569]: I0128 04:11:47.376381 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3de0aba95ea39af80e6d3ceefa973fe-ca-certs\") pod \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" (UID: \"a3de0aba95ea39af80e6d3ceefa973fe\") " pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376721 kubelet[2569]: I0128 04:11:47.376539 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3de0aba95ea39af80e6d3ceefa973fe-usr-share-ca-certificates\") pod 
\"kube-apiserver-srv-3avyi.gb1.brightbox.com\" (UID: \"a3de0aba95ea39af80e6d3ceefa973fe\") " pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376721 kubelet[2569]: I0128 04:11:47.376578 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-flexvolume-dir\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376721 kubelet[2569]: I0128 04:11:47.376608 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-kubeconfig\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376721 kubelet[2569]: I0128 04:11:47.376641 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc9e7e5a51a7f27d45b06bb0c9a8a0d6-kubeconfig\") pod \"kube-scheduler-srv-3avyi.gb1.brightbox.com\" (UID: \"dc9e7e5a51a7f27d45b06bb0c9a8a0d6\") " pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.376721 kubelet[2569]: I0128 04:11:47.376667 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3de0aba95ea39af80e6d3ceefa973fe-k8s-certs\") pod \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" (UID: \"a3de0aba95ea39af80e6d3ceefa973fe\") " pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.486133 kubelet[2569]: I0128 04:11:47.486058 2569 kubelet_node_status.go:75] "Attempting to register node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.487292 kubelet[2569]: E0128 04:11:47.487221 2569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.102:6443/api/v1/nodes\": dial tcp 10.230.66.102:6443: connect: connection refused" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.568880 containerd[1648]: time="2026-01-28T04:11:47.568487033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3avyi.gb1.brightbox.com,Uid:11053e522131eb66054c9b582af885d5,Namespace:kube-system,Attempt:0,}" Jan 28 04:11:47.578501 containerd[1648]: time="2026-01-28T04:11:47.578280246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3avyi.gb1.brightbox.com,Uid:a3de0aba95ea39af80e6d3ceefa973fe,Namespace:kube-system,Attempt:0,}" Jan 28 04:11:47.585411 containerd[1648]: time="2026-01-28T04:11:47.585161848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3avyi.gb1.brightbox.com,Uid:dc9e7e5a51a7f27d45b06bb0c9a8a0d6,Namespace:kube-system,Attempt:0,}" Jan 28 04:11:47.679603 kubelet[2569]: E0128 04:11:47.679549 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3avyi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.102:6443: connect: connection refused" interval="800ms" Jan 28 04:11:47.722063 containerd[1648]: time="2026-01-28T04:11:47.721984802Z" level=info msg="connecting to shim 
90e9987d65fe34bc7f576dd1f59b695f32fe106429da9926bb2f02b0e26b4b25" address="unix:///run/containerd/s/cb2134bbd181039b1adc8b4d32d28c324104d4fcdf28b7e919982c530ef6fef3" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:11:47.731297 containerd[1648]: time="2026-01-28T04:11:47.730711482Z" level=info msg="connecting to shim a3ad4b7d7a868e3aeee980c1a2fbd010325dac2abcc7d4cfdff852856b98cb88" address="unix:///run/containerd/s/6bf580797ff79fcd3bb2af7398f4dcaed2ece1e8cf8d8d357f2d22c10b54a19e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:11:47.732209 containerd[1648]: time="2026-01-28T04:11:47.732167009Z" level=info msg="connecting to shim 15f57ae44c27bb8ae300f25d5199cdbdd470d1356e614c4482f92e4228e4a735" address="unix:///run/containerd/s/a5336fba950ac5d5f0acf0fb28ca2ea06c2bcf8d86e49be597b324e8472a649b" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:11:47.864690 systemd[1]: Started cri-containerd-15f57ae44c27bb8ae300f25d5199cdbdd470d1356e614c4482f92e4228e4a735.scope - libcontainer container 15f57ae44c27bb8ae300f25d5199cdbdd470d1356e614c4482f92e4228e4a735. Jan 28 04:11:47.868005 systemd[1]: Started cri-containerd-90e9987d65fe34bc7f576dd1f59b695f32fe106429da9926bb2f02b0e26b4b25.scope - libcontainer container 90e9987d65fe34bc7f576dd1f59b695f32fe106429da9926bb2f02b0e26b4b25. Jan 28 04:11:47.871303 systemd[1]: Started cri-containerd-a3ad4b7d7a868e3aeee980c1a2fbd010325dac2abcc7d4cfdff852856b98cb88.scope - libcontainer container a3ad4b7d7a868e3aeee980c1a2fbd010325dac2abcc7d4cfdff852856b98cb88. Jan 28 04:11:47.891032 kubelet[2569]: I0128 04:11:47.890992 2569 kubelet_node_status.go:75] "Attempting to register node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.892036 kubelet[2569]: E0128 04:11:47.891961 2569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.102:6443/api/v1/nodes\": dial tcp 10.230.66.102:6443: connect: connection refused" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:47.917000 audit: BPF prog-id=86 op=LOAD Jan 28 04:11:47.920000 audit: BPF prog-id=87 op=LOAD Jan 28 04:11:47.920000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.921000 audit: BPF prog-id=87 op=UNLOAD Jan 28 04:11:47.921000 audit[2661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.923000 audit: BPF prog-id=88 op=LOAD Jan 28 04:11:47.926000 audit: BPF prog-id=89 op=LOAD Jan 28 04:11:47.926000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.926000 audit: BPF prog-id=90 op=LOAD Jan 28 04:11:47.926000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.926000 audit: BPF prog-id=90 op=UNLOAD Jan 28 04:11:47.926000 audit[2661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.926000 audit: BPF prog-id=89 op=UNLOAD Jan 28 04:11:47.926000 audit[2661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.926000 audit: BPF prog-id=91 op=LOAD Jan 28 04:11:47.926000 audit[2661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2622 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930653939383764363566653334626337663537366464316635396236 Jan 28 04:11:47.927000 audit: BPF prog-id=92 op=LOAD Jan 28 04:11:47.927000 audit: BPF prog-id=93 op=LOAD Jan 28 04:11:47.927000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.927000 audit: BPF prog-id=93 op=UNLOAD Jan 28 04:11:47.927000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.932000 audit: BPF prog-id=94 op=LOAD Jan 28 04:11:47.932000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.933000 audit: BPF prog-id=95 op=LOAD Jan 28 04:11:47.933000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.933000 audit: BPF prog-id=95 op=UNLOAD Jan 28 04:11:47.933000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.933000 audit: BPF prog-id=94 op=UNLOAD Jan 28 04:11:47.933000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.933000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.933000 audit: BPF prog-id=96 op=LOAD Jan 28 04:11:47.933000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2630 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135663537616534346332376262386165333030663235643531393963 Jan 28 04:11:47.933000 audit: BPF prog-id=97 op=LOAD Jan 28 04:11:47.933000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.934000 audit: BPF prog-id=97 op=UNLOAD Jan 28 04:11:47.934000 audit[2659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.935000 audit: BPF prog-id=98 op=LOAD Jan 28 04:11:47.935000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.936000 audit: BPF prog-id=99 op=LOAD Jan 28 04:11:47.936000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.938000 audit: BPF prog-id=99 op=UNLOAD Jan 28 04:11:47.938000 audit[2659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.938000 audit: BPF prog-id=98 op=UNLOAD Jan 28 04:11:47.938000 audit[2659]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.938000 audit: BPF prog-id=100 op=LOAD Jan 28 04:11:47.938000 audit[2659]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2628 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:47.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133616434623764376138363865336165656539383063316132666264 Jan 28 04:11:47.942108 kubelet[2569]: W0128 04:11:47.932283 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.66.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3avyi.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:47.942108 kubelet[2569]: E0128 04:11:47.932426 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.66.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3avyi.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:48.032023 containerd[1648]: time="2026-01-28T04:11:48.031922449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3avyi.gb1.brightbox.com,Uid:11053e522131eb66054c9b582af885d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3ad4b7d7a868e3aeee980c1a2fbd010325dac2abcc7d4cfdff852856b98cb88\"" Jan 28 04:11:48.040591 containerd[1648]: time="2026-01-28T04:11:48.040503832Z" level=info msg="CreateContainer within sandbox 
\"a3ad4b7d7a868e3aeee980c1a2fbd010325dac2abcc7d4cfdff852856b98cb88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 04:11:48.048170 containerd[1648]: time="2026-01-28T04:11:48.048128087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3avyi.gb1.brightbox.com,Uid:dc9e7e5a51a7f27d45b06bb0c9a8a0d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e9987d65fe34bc7f576dd1f59b695f32fe106429da9926bb2f02b0e26b4b25\"" Jan 28 04:11:48.052667 containerd[1648]: time="2026-01-28T04:11:48.052628205Z" level=info msg="CreateContainer within sandbox \"90e9987d65fe34bc7f576dd1f59b695f32fe106429da9926bb2f02b0e26b4b25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 04:11:48.057360 containerd[1648]: time="2026-01-28T04:11:48.057244666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3avyi.gb1.brightbox.com,Uid:a3de0aba95ea39af80e6d3ceefa973fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"15f57ae44c27bb8ae300f25d5199cdbdd470d1356e614c4482f92e4228e4a735\"" Jan 28 04:11:48.062454 containerd[1648]: time="2026-01-28T04:11:48.062379395Z" level=info msg="CreateContainer within sandbox \"15f57ae44c27bb8ae300f25d5199cdbdd470d1356e614c4482f92e4228e4a735\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 04:11:48.062836 containerd[1648]: time="2026-01-28T04:11:48.062767428Z" level=info msg="Container 8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:11:48.070315 containerd[1648]: time="2026-01-28T04:11:48.070281977Z" level=info msg="Container f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:11:48.076719 containerd[1648]: time="2026-01-28T04:11:48.076674660Z" level=info msg="CreateContainer within sandbox \"a3ad4b7d7a868e3aeee980c1a2fbd010325dac2abcc7d4cfdff852856b98cb88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6\"" Jan 28 04:11:48.078811 containerd[1648]: time="2026-01-28T04:11:48.078776032Z" level=info msg="CreateContainer within sandbox \"90e9987d65fe34bc7f576dd1f59b695f32fe106429da9926bb2f02b0e26b4b25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c\"" Jan 28 04:11:48.079739 containerd[1648]: time="2026-01-28T04:11:48.079708116Z" level=info msg="Container b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:11:48.080719 containerd[1648]: time="2026-01-28T04:11:48.080689182Z" level=info msg="StartContainer for \"f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c\"" Jan 28 04:11:48.084448 containerd[1648]: time="2026-01-28T04:11:48.084415618Z" level=info msg="StartContainer for \"8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6\"" Jan 28 04:11:48.085702 containerd[1648]: time="2026-01-28T04:11:48.085669078Z" level=info msg="connecting to shim 8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6" address="unix:///run/containerd/s/6bf580797ff79fcd3bb2af7398f4dcaed2ece1e8cf8d8d357f2d22c10b54a19e" protocol=ttrpc version=3 Jan 28 04:11:48.085990 containerd[1648]: time="2026-01-28T04:11:48.085935989Z" level=info msg="connecting to shim f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c" 
address="unix:///run/containerd/s/cb2134bbd181039b1adc8b4d32d28c324104d4fcdf28b7e919982c530ef6fef3" protocol=ttrpc version=3 Jan 28 04:11:48.092952 containerd[1648]: time="2026-01-28T04:11:48.092906985Z" level=info msg="CreateContainer within sandbox \"15f57ae44c27bb8ae300f25d5199cdbdd470d1356e614c4482f92e4228e4a735\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b\"" Jan 28 04:11:48.093683 containerd[1648]: time="2026-01-28T04:11:48.093643158Z" level=info msg="StartContainer for \"b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b\"" Jan 28 04:11:48.095378 containerd[1648]: time="2026-01-28T04:11:48.095311946Z" level=info msg="connecting to shim b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b" address="unix:///run/containerd/s/a5336fba950ac5d5f0acf0fb28ca2ea06c2bcf8d86e49be597b324e8472a649b" protocol=ttrpc version=3 Jan 28 04:11:48.126619 systemd[1]: Started cri-containerd-f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c.scope - libcontainer container f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c. Jan 28 04:11:48.132849 kubelet[2569]: W0128 04:11:48.132754 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.66.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:48.132942 kubelet[2569]: E0128 04:11:48.132870 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.66.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:48.137724 systemd[1]: Started cri-containerd-8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6.scope - libcontainer container 8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6. Jan 28 04:11:48.160501 systemd[1]: Started cri-containerd-b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b.scope - libcontainer container b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b. 
Jan 28 04:11:48.183000 audit: BPF prog-id=101 op=LOAD Jan 28 04:11:48.184000 audit: BPF prog-id=102 op=LOAD Jan 28 04:11:48.184000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.184000 audit: BPF prog-id=102 op=UNLOAD Jan 28 04:11:48.184000 audit[2746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.185000 audit: BPF prog-id=103 op=LOAD Jan 28 04:11:48.185000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.186000 audit: BPF prog-id=104 op=LOAD Jan 28 04:11:48.186000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.186000 audit: BPF prog-id=104 op=UNLOAD Jan 28 04:11:48.186000 audit[2746]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.186000 audit: BPF prog-id=103 op=UNLOAD Jan 28 04:11:48.186000 audit[2746]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.186000 audit: BPF prog-id=105 op=LOAD Jan 28 04:11:48.186000 audit[2746]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2628 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865316135613364366462333730646633323537653536346165303233 Jan 28 04:11:48.194000 audit: BPF prog-id=106 op=LOAD Jan 28 04:11:48.194000 audit: BPF prog-id=107 op=LOAD Jan 28 04:11:48.194000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.194000 audit: BPF prog-id=107 op=UNLOAD Jan 28 04:11:48.194000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.195000 audit: BPF prog-id=108 op=LOAD Jan 28 04:11:48.195000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.196000 audit: BPF prog-id=109 op=LOAD Jan 28 04:11:48.196000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2630 pid=2752 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.197000 audit: BPF prog-id=109 op=UNLOAD Jan 28 04:11:48.197000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.197000 audit: BPF prog-id=108 op=UNLOAD Jan 28 04:11:48.197000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.197000 audit: BPF prog-id=110 op=LOAD Jan 28 04:11:48.197000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353561396236376261656562666536616534383164396365386436 Jan 28 04:11:48.199000 audit: BPF prog-id=111 op=LOAD Jan 28 04:11:48.200000 audit: BPF prog-id=112 op=LOAD Jan 28 04:11:48.200000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.200000 audit: BPF prog-id=112 op=UNLOAD Jan 28 04:11:48.200000 audit[2745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.200000 audit: BPF prog-id=113 op=LOAD Jan 28 04:11:48.200000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.201000 audit: BPF prog-id=114 op=LOAD Jan 28 04:11:48.201000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.202000 audit: BPF prog-id=114 op=UNLOAD Jan 28 04:11:48.202000 audit[2745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.202000 audit: BPF prog-id=113 op=UNLOAD Jan 28 04:11:48.202000 audit[2745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.203000 audit: BPF prog-id=115 op=LOAD Jan 28 04:11:48.203000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2622 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:11:48.203000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631633530656331613963376661386561326634333666343330623234 Jan 28 04:11:48.260037 kubelet[2569]: W0128 04:11:48.259698 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.66.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:48.260037 kubelet[2569]: E0128 04:11:48.259954 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.66.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:48.480897 kubelet[2569]: E0128 04:11:48.480826 2569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3avyi.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.102:6443: connect: connection refused" interval="1.6s" Jan 28 04:11:48.538682 kubelet[2569]: W0128 04:11:48.538576 2569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.66.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.102:6443: connect: connection refused Jan 28 04:11:48.541072 kubelet[2569]: E0128 04:11:48.539952 2569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.66.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.66.102:6443: connect: connection refused" logger="UnhandledError" Jan 28 04:11:48.555924 containerd[1648]: time="2026-01-28T04:11:48.555613639Z" level=info msg="StartContainer for \"b055a9b67baeebfe6ae481d9ce8d6a14ee9618ab5ec818b3bcb5f36e0e4ad24b\" returns successfully" Jan 28 04:11:48.559231 containerd[1648]: time="2026-01-28T04:11:48.559130143Z" level=info msg="StartContainer for \"8e1a5a3d6db370df3257e564ae023bb444dd9aab3ecbda4363bfc7578d2566f6\" returns successfully" Jan 28 04:11:48.559584 containerd[1648]: time="2026-01-28T04:11:48.559538126Z" level=info msg="StartContainer for \"f1c50ec1a9c7fa8ea2f436f430b24c744ca47e6278b67712b1512b7c1959525c\" returns successfully" Jan 28 04:11:48.696848 kubelet[2569]: I0128 04:11:48.696796 2569 kubelet_node_status.go:75] "Attempting to register node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:48.697733 kubelet[2569]: E0128 04:11:48.697693 2569 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.102:6443/api/v1/nodes\": dial tcp 10.230.66.102:6443: connect: connection refused" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:49.162799 kubelet[2569]: E0128 04:11:49.162559 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:49.167699 kubelet[2569]: E0128 04:11:49.167668 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" 
node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:49.171556 kubelet[2569]: E0128 04:11:49.171531 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:50.184834 kubelet[2569]: E0128 04:11:50.184799 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:50.187480 kubelet[2569]: E0128 04:11:50.186370 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:50.187480 kubelet[2569]: E0128 04:11:50.186706 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:50.301142 kubelet[2569]: I0128 04:11:50.300998 2569 kubelet_node_status.go:75] "Attempting to register node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.185691 kubelet[2569]: E0128 04:11:51.183968 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.193093 kubelet[2569]: E0128 04:11:51.193050 2569 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.352458 kubelet[2569]: E0128 04:11:51.352389 2569 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-3avyi.gb1.brightbox.com\" not found" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.376653 kubelet[2569]: I0128 04:11:51.376451 2569 kubelet_node_status.go:78] "Successfully registered node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.376850 kubelet[2569]: E0128 04:11:51.376673 2569 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-3avyi.gb1.brightbox.com\": node \"srv-3avyi.gb1.brightbox.com\" not found" Jan 28 04:11:51.378312 kubelet[2569]: I0128 04:11:51.377249 2569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.418604 kubelet[2569]: E0128 04:11:51.418537 2569 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.418604 kubelet[2569]: I0128 04:11:51.418599 2569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.423297 kubelet[2569]: E0128 04:11:51.422593 2569 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-3avyi.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.423297 kubelet[2569]: I0128 04:11:51.422645 2569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:51.426733 kubelet[2569]: 
E0128 04:11:51.426696 2569 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:52.051893 kubelet[2569]: I0128 04:11:52.051427 2569 apiserver.go:52] "Watching apiserver" Jan 28 04:11:52.075485 kubelet[2569]: I0128 04:11:52.075457 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 04:11:53.006107 kubelet[2569]: I0128 04:11:53.006064 2569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:53.018158 kubelet[2569]: W0128 04:11:53.017640 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 04:11:53.305980 kubelet[2569]: I0128 04:11:53.305616 2569 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:53.310539 kubelet[2569]: W0128 04:11:53.310225 2569 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 04:11:53.405656 systemd[1]: Reload requested from client PID 2839 ('systemctl') (unit session-12.scope)... Jan 28 04:11:53.405687 systemd[1]: Reloading... Jan 28 04:11:53.571315 zram_generator::config[2890]: No configuration found. Jan 28 04:11:53.942851 systemd[1]: Reloading finished in 536 ms. Jan 28 04:11:53.982799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 04:11:54.002644 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 04:11:54.004354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:54.009636 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 28 04:11:54.009706 kernel: audit: type=1131 audit(1769573514.003:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:54.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:11:54.004478 systemd[1]: kubelet.service: Consumed 1.269s CPU time, 127.7M memory peak. Jan 28 04:11:54.011935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 04:11:54.012000 audit: BPF prog-id=116 op=LOAD Jan 28 04:11:54.016846 kernel: audit: type=1334 audit(1769573514.012:397): prog-id=116 op=LOAD Jan 28 04:11:54.016905 kernel: audit: type=1334 audit(1769573514.012:398): prog-id=76 op=UNLOAD Jan 28 04:11:54.012000 audit: BPF prog-id=76 op=UNLOAD Jan 28 04:11:54.018332 kernel: audit: type=1334 audit(1769573514.013:399): prog-id=117 op=LOAD Jan 28 04:11:54.013000 audit: BPF prog-id=117 op=LOAD Jan 28 04:11:54.019743 kernel: audit: type=1334 audit(1769573514.013:400): prog-id=118 op=LOAD Jan 28 04:11:54.013000 audit: BPF prog-id=118 op=LOAD Jan 28 04:11:54.021153 kernel: audit: type=1334 audit(1769573514.013:401): prog-id=77 op=UNLOAD Jan 28 04:11:54.013000 audit: BPF prog-id=77 op=UNLOAD Jan 28 04:11:54.013000 audit: BPF prog-id=78 op=UNLOAD Jan 28 04:11:54.023892 kernel: audit: type=1334 audit(1769573514.013:402): prog-id=78 op=UNLOAD Jan 28 04:11:54.023958 kernel: audit: type=1334 audit(1769573514.019:403): prog-id=119 op=LOAD Jan 28 04:11:54.019000 audit: BPF prog-id=119 op=LOAD Jan 28 04:11:54.025335 kernel: audit: type=1334 audit(1769573514.019:404): prog-id=69 op=UNLOAD Jan 28 04:11:54.019000 audit: BPF prog-id=69 op=UNLOAD Jan 28 04:11:54.026723 kernel: audit: type=1334 audit(1769573514.019:405): prog-id=120 op=LOAD Jan 28 04:11:54.019000 audit: BPF prog-id=120 op=LOAD Jan 28 04:11:54.019000 audit: BPF prog-id=121 op=LOAD Jan 28 04:11:54.019000 audit: BPF prog-id=70 op=UNLOAD Jan 28 04:11:54.019000 audit: BPF prog-id=71 op=UNLOAD Jan 28 04:11:54.021000 audit: BPF prog-id=122 op=LOAD Jan 28 04:11:54.021000 audit: BPF prog-id=82 op=UNLOAD Jan 28 04:11:54.024000 audit: BPF prog-id=123 op=LOAD Jan 28 04:11:54.024000 audit: BPF prog-id=68 op=UNLOAD Jan 28 04:11:54.026000 audit: BPF prog-id=124 op=LOAD Jan 28 04:11:54.026000 audit: BPF prog-id=72 op=UNLOAD Jan 28 04:11:54.027000 audit: BPF prog-id=125 op=LOAD Jan 28 04:11:54.039000 audit: BPF prog-id=85 op=UNLOAD Jan 28 04:11:54.040000 audit: BPF prog-id=126 op=LOAD Jan 28 04:11:54.040000 audit: BPF prog-id=73 op=UNLOAD Jan 28 04:11:54.040000 audit: BPF prog-id=127 op=LOAD Jan 28 04:11:54.040000 audit: BPF prog-id=128 op=LOAD Jan 28 04:11:54.040000 audit: BPF prog-id=74 op=UNLOAD Jan 28 04:11:54.040000 audit: BPF prog-id=75 op=UNLOAD Jan 28 04:11:54.042000 audit: BPF prog-id=129 op=LOAD Jan 28 04:11:54.042000 audit: BPF prog-id=65 op=UNLOAD Jan 28 04:11:54.042000 audit: BPF prog-id=130 op=LOAD Jan 28 04:11:54.042000 audit: BPF prog-id=131 op=LOAD Jan 28 04:11:54.042000 audit: BPF prog-id=66 op=UNLOAD Jan 28 04:11:54.042000 audit: BPF prog-id=67 op=UNLOAD Jan 28 04:11:54.044000 audit: BPF prog-id=132 op=LOAD Jan 28 04:11:54.045000 audit: BPF prog-id=133 op=LOAD Jan 28 04:11:54.045000 audit: BPF prog-id=83 op=UNLOAD Jan 28 04:11:54.045000 audit: BPF prog-id=84 op=UNLOAD Jan 28 04:11:54.045000 audit: BPF prog-id=134 op=LOAD Jan 28 04:11:54.045000 audit: BPF prog-id=79 op=UNLOAD Jan 28 04:11:54.046000 audit: BPF prog-id=135 op=LOAD Jan 28 04:11:54.046000 audit: BPF prog-id=136 op=LOAD Jan 28 04:11:54.046000 audit: BPF prog-id=80 op=UNLOAD Jan 28 04:11:54.046000 audit: BPF prog-id=81 op=UNLOAD Jan 28 04:11:54.345246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 04:11:54.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:11:54.358032 (kubelet)[2950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 04:11:54.477844 kubelet[2950]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 04:11:54.479277 kubelet[2950]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 04:11:54.479277 kubelet[2950]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 04:11:54.479277 kubelet[2950]: I0128 04:11:54.478702 2950 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 04:11:54.494117 kubelet[2950]: I0128 04:11:54.494043 2950 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 04:11:54.494383 kubelet[2950]: I0128 04:11:54.494352 2950 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 04:11:54.495048 kubelet[2950]: I0128 04:11:54.495026 2950 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 04:11:54.497626 kubelet[2950]: I0128 04:11:54.497602 2950 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 04:11:54.502278 kubelet[2950]: I0128 04:11:54.501650 2950 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 04:11:54.512925 kubelet[2950]: I0128 04:11:54.512903 2950 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 04:11:54.521399 kubelet[2950]: I0128 04:11:54.521372 2950 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 04:11:54.524273 kubelet[2950]: I0128 04:11:54.524212 2950 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 04:11:54.524878 kubelet[2950]: I0128 04:11:54.524390 2950 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3avyi.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 04:11:54.525128 kubelet[2950]: I0128 04:11:54.525099 2950 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 04:11:54.525454 kubelet[2950]: I0128 04:11:54.525210 2950 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 04:11:54.525454 kubelet[2950]: I0128 04:11:54.525379 2950 state_mem.go:36] "Initialized new in-memory state store" Jan 28 04:11:54.525855 kubelet[2950]: I0128 04:11:54.525833 2950 kubelet.go:446] "Attempting to sync node with API server" Jan 28 04:11:54.526003 kubelet[2950]: I0128 04:11:54.525983 2950 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 04:11:54.526183 kubelet[2950]: I0128 04:11:54.526164 2950 kubelet.go:352] "Adding apiserver pod source" Jan 28 04:11:54.526325 kubelet[2950]: I0128 04:11:54.526305 2950 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 04:11:54.534950 kubelet[2950]: I0128 04:11:54.534755 2950 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 04:11:54.539101 kubelet[2950]: I0128 04:11:54.537871 2950 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 04:11:54.539101 kubelet[2950]: I0128 04:11:54.538675 2950 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 04:11:54.539101 kubelet[2950]: I0128 04:11:54.538728 2950 server.go:1287] "Started kubelet" Jan 28 04:11:54.551716 kubelet[2950]: I0128 04:11:54.550815 2950 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 04:11:54.570739 kubelet[2950]: I0128 04:11:54.569411 2950 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 28 04:11:54.573374 kubelet[2950]: I0128 04:11:54.573340 2950 server.go:479] "Adding debug handlers to kubelet server" Jan 28 04:11:54.579179 kubelet[2950]: I0128 04:11:54.578161 2950 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 04:11:54.585200 kubelet[2950]: I0128 04:11:54.585087 2950 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 04:11:54.586987 kubelet[2950]: I0128 04:11:54.579481 2950 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 04:11:54.591012 kubelet[2950]: I0128 04:11:54.578815 2950 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 04:11:54.594824 kubelet[2950]: E0128 04:11:54.579687 2950 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-3avyi.gb1.brightbox.com\" not found" Jan 28 04:11:54.594968 kubelet[2950]: I0128 04:11:54.592845 2950 factory.go:221] Registration of the systemd container factory successfully Jan 28 04:11:54.595555 kubelet[2950]: I0128 04:11:54.595321 2950 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 04:11:54.597321 kubelet[2950]: I0128 04:11:54.579508 2950 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 04:11:54.599428 kubelet[2950]: I0128 04:11:54.599038 2950 reconciler.go:26] "Reconciler: start to sync state" Jan 28 04:11:54.600346 kubelet[2950]: I0128 04:11:54.599680 2950 factory.go:221] Registration of the containerd container factory successfully Jan 28 04:11:54.605507 kubelet[2950]: I0128 04:11:54.605425 2950 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 04:11:54.611367 kubelet[2950]: I0128 04:11:54.611315 2950 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 04:11:54.611521 kubelet[2950]: I0128 04:11:54.611396 2950 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 04:11:54.611521 kubelet[2950]: I0128 04:11:54.611438 2950 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 04:11:54.611521 kubelet[2950]: I0128 04:11:54.611453 2950 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 04:11:54.611672 kubelet[2950]: E0128 04:11:54.611544 2950 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 04:11:54.701576 kubelet[2950]: I0128 04:11:54.701042 2950 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 04:11:54.701576 kubelet[2950]: I0128 04:11:54.701071 2950 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 04:11:54.701576 kubelet[2950]: I0128 04:11:54.701119 2950 state_mem.go:36] "Initialized new in-memory state store" Jan 28 04:11:54.701812 kubelet[2950]: I0128 04:11:54.701685 2950 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 04:11:54.701812 kubelet[2950]: I0128 04:11:54.701705 2950 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 04:11:54.701812 kubelet[2950]: I0128 04:11:54.701768 2950 policy_none.go:49] "None policy: Start" Jan 28 04:11:54.701941 kubelet[2950]: I0128 04:11:54.701796 2950 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 04:11:54.701941 kubelet[2950]: I0128 04:11:54.701849 2950 state_mem.go:35] "Initializing new in-memory state store" Jan 28 04:11:54.702984 kubelet[2950]: I0128 04:11:54.702668 2950 state_mem.go:75] "Updated machine memory state" Jan 28 04:11:54.712179 kubelet[2950]: E0128 04:11:54.711784 2950 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 04:11:54.716632 kubelet[2950]: I0128 04:11:54.716584 2950 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 04:11:54.717866 kubelet[2950]: I0128 04:11:54.717427 2950 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 04:11:54.717866 kubelet[2950]: I0128 04:11:54.717503 2950 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 04:11:54.719038 kubelet[2950]: I0128 04:11:54.718879 2950 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 04:11:54.728698 kubelet[2950]: E0128 04:11:54.728652 2950 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 04:11:54.850560 kubelet[2950]: I0128 04:11:54.850376 2950 kubelet_node_status.go:75] "Attempting to register node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.864704 kubelet[2950]: I0128 04:11:54.864463 2950 kubelet_node_status.go:124] "Node was previously registered" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.866061 kubelet[2950]: I0128 04:11:54.865504 2950 kubelet_node_status.go:78] "Successfully registered node" node="srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.916939 kubelet[2950]: I0128 04:11:54.916871 2950 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.922010 kubelet[2950]: I0128 04:11:54.920310 2950 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.922010 kubelet[2950]: I0128 04:11:54.921182 2950 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.932120 kubelet[2950]: W0128 04:11:54.932084 2950 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 04:11:54.932277 kubelet[2950]: E0128 04:11:54.932155 2950 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:54.933713 kubelet[2950]: W0128 04:11:54.932527 2950 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 04:11:54.933713 kubelet[2950]: W0128 04:11:54.932730 2950 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 04:11:54.933713 kubelet[2950]: E0128 04:11:54.932765 2950 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-3avyi.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000564 kubelet[2950]: I0128 04:11:55.000486 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-kubeconfig\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000564 kubelet[2950]: I0128 04:11:55.000554 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3de0aba95ea39af80e6d3ceefa973fe-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" (UID: \"a3de0aba95ea39af80e6d3ceefa973fe\") " pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000805 kubelet[2950]: I0128 04:11:55.000590 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-k8s-certs\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " 
pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000805 kubelet[2950]: I0128 04:11:55.000617 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-ca-certs\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000805 kubelet[2950]: I0128 04:11:55.000644 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-flexvolume-dir\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000805 kubelet[2950]: I0128 04:11:55.000673 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11053e522131eb66054c9b582af885d5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3avyi.gb1.brightbox.com\" (UID: \"11053e522131eb66054c9b582af885d5\") " pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.000805 kubelet[2950]: I0128 04:11:55.000700 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc9e7e5a51a7f27d45b06bb0c9a8a0d6-kubeconfig\") pod \"kube-scheduler-srv-3avyi.gb1.brightbox.com\" (UID: \"dc9e7e5a51a7f27d45b06bb0c9a8a0d6\") " pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.001041 kubelet[2950]: I0128 04:11:55.000727 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3de0aba95ea39af80e6d3ceefa973fe-ca-certs\") pod \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" (UID: \"a3de0aba95ea39af80e6d3ceefa973fe\") " pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.001041 kubelet[2950]: I0128 04:11:55.000753 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3de0aba95ea39af80e6d3ceefa973fe-k8s-certs\") pod \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" (UID: \"a3de0aba95ea39af80e6d3ceefa973fe\") " pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.529302 kubelet[2950]: I0128 04:11:55.528434 2950 apiserver.go:52] "Watching apiserver" Jan 28 04:11:55.598045 kubelet[2950]: I0128 04:11:55.597830 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" podStartSLOduration=2.597738962 podStartE2EDuration="2.597738962s" podCreationTimestamp="2026-01-28 04:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 04:11:55.581879179 +0000 UTC m=+1.193832951" watchObservedRunningTime="2026-01-28 04:11:55.597738962 +0000 UTC m=+1.209692740" Jan 28 04:11:55.598352 kubelet[2950]: I0128 04:11:55.598116 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-3avyi.gb1.brightbox.com" 
podStartSLOduration=2.598106681 podStartE2EDuration="2.598106681s" podCreationTimestamp="2026-01-28 04:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 04:11:55.597097976 +0000 UTC m=+1.209051787" watchObservedRunningTime="2026-01-28 04:11:55.598106681 +0000 UTC m=+1.210060465" Jan 28 04:11:55.599313 kubelet[2950]: I0128 04:11:55.598617 2950 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 04:11:55.618743 kubelet[2950]: I0128 04:11:55.618494 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-3avyi.gb1.brightbox.com" podStartSLOduration=1.618470584 podStartE2EDuration="1.618470584s" podCreationTimestamp="2026-01-28 04:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 04:11:55.617855717 +0000 UTC m=+1.229809521" watchObservedRunningTime="2026-01-28 04:11:55.618470584 +0000 UTC m=+1.230424370" Jan 28 04:11:55.664573 kubelet[2950]: I0128 04:11:55.664050 2950 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:11:55.681619 kubelet[2950]: W0128 04:11:55.681220 2950 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 04:11:55.681619 kubelet[2950]: E0128 04:11:55.681323 2950 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-3avyi.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-3avyi.gb1.brightbox.com" Jan 28 04:12:00.253056 kubelet[2950]: I0128 04:12:00.252906 2950 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 04:12:00.254372 containerd[1648]: time="2026-01-28T04:12:00.253572499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 28 04:12:00.256618 kubelet[2950]: I0128 04:12:00.255819 2950 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 04:12:01.241222 kubelet[2950]: I0128 04:12:01.241169 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6999e16d-4ac6-4a74-9b44-796f8cc46df5-xtables-lock\") pod \"kube-proxy-fgbk9\" (UID: \"6999e16d-4ac6-4a74-9b44-796f8cc46df5\") " pod="kube-system/kube-proxy-fgbk9" Jan 28 04:12:01.241222 kubelet[2950]: I0128 04:12:01.241223 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm4n7\" (UniqueName: \"kubernetes.io/projected/6999e16d-4ac6-4a74-9b44-796f8cc46df5-kube-api-access-tm4n7\") pod \"kube-proxy-fgbk9\" (UID: \"6999e16d-4ac6-4a74-9b44-796f8cc46df5\") " pod="kube-system/kube-proxy-fgbk9" Jan 28 04:12:01.241669 kubelet[2950]: I0128 04:12:01.241635 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6999e16d-4ac6-4a74-9b44-796f8cc46df5-lib-modules\") pod \"kube-proxy-fgbk9\" (UID: \"6999e16d-4ac6-4a74-9b44-796f8cc46df5\") " pod="kube-system/kube-proxy-fgbk9" Jan 28 04:12:01.241757 kubelet[2950]: I0128 04:12:01.241728 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6999e16d-4ac6-4a74-9b44-796f8cc46df5-kube-proxy\") pod \"kube-proxy-fgbk9\" (UID: \"6999e16d-4ac6-4a74-9b44-796f8cc46df5\") " pod="kube-system/kube-proxy-fgbk9" Jan 28 04:12:01.248849 systemd[1]: Created slice kubepods-besteffort-pod6999e16d_4ac6_4a74_9b44_796f8cc46df5.slice - libcontainer container kubepods-besteffort-pod6999e16d_4ac6_4a74_9b44_796f8cc46df5.slice. Jan 28 04:12:01.442635 systemd[1]: Created slice kubepods-besteffort-podf68ff9b8_f4b4_4d57_a54e_da7e5203a51b.slice - libcontainer container kubepods-besteffort-podf68ff9b8_f4b4_4d57_a54e_da7e5203a51b.slice. 
Jan 28 04:12:01.444242 kubelet[2950]: I0128 04:12:01.444194 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzl99\" (UniqueName: \"kubernetes.io/projected/f68ff9b8-f4b4-4d57-a54e-da7e5203a51b-kube-api-access-jzl99\") pod \"tigera-operator-7dcd859c48-zgmtb\" (UID: \"f68ff9b8-f4b4-4d57-a54e-da7e5203a51b\") " pod="tigera-operator/tigera-operator-7dcd859c48-zgmtb" Jan 28 04:12:01.446578 kubelet[2950]: I0128 04:12:01.444270 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f68ff9b8-f4b4-4d57-a54e-da7e5203a51b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zgmtb\" (UID: \"f68ff9b8-f4b4-4d57-a54e-da7e5203a51b\") " pod="tigera-operator/tigera-operator-7dcd859c48-zgmtb" Jan 28 04:12:01.565151 containerd[1648]: time="2026-01-28T04:12:01.564346398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgbk9,Uid:6999e16d-4ac6-4a74-9b44-796f8cc46df5,Namespace:kube-system,Attempt:0,}" Jan 28 04:12:01.638035 containerd[1648]: time="2026-01-28T04:12:01.637934921Z" level=info msg="connecting to shim 876f95cebdddb806cf2c01b08b3721c43c9b6bfd33ae178ffa66c933eebee7f6" address="unix:///run/containerd/s/1701cc53a259f299948fda989538bfe6a5952451a1b3db1b80b1ddfe5effe467" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:12:01.710824 systemd[1]: Started cri-containerd-876f95cebdddb806cf2c01b08b3721c43c9b6bfd33ae178ffa66c933eebee7f6.scope - libcontainer container 876f95cebdddb806cf2c01b08b3721c43c9b6bfd33ae178ffa66c933eebee7f6. Jan 28 04:12:01.729000 audit: BPF prog-id=137 op=LOAD Jan 28 04:12:01.734432 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 04:12:01.734561 kernel: audit: type=1334 audit(1769573521.729:440): prog-id=137 op=LOAD Jan 28 04:12:01.737000 audit: BPF prog-id=138 op=LOAD Jan 28 04:12:01.737000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.742459 kernel: audit: type=1334 audit(1769573521.737:441): prog-id=138 op=LOAD Jan 28 04:12:01.742528 kernel: audit: type=1300 audit(1769573521.737:441): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.751294 kernel: audit: type=1327 audit(1769573521.737:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.737000 audit: BPF prog-id=138 op=UNLOAD Jan 28 04:12:01.737000 audit[3021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.754536 containerd[1648]: time="2026-01-28T04:12:01.753105839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zgmtb,Uid:f68ff9b8-f4b4-4d57-a54e-da7e5203a51b,Namespace:tigera-operator,Attempt:0,}" Jan 28 04:12:01.755467 kernel: audit: type=1334 audit(1769573521.737:442): prog-id=138 op=UNLOAD Jan 28 04:12:01.756294 kernel: audit: type=1300 audit(1769573521.737:442): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.760275 kernel: audit: type=1327 audit(1769573521.737:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.738000 audit: BPF prog-id=139 op=LOAD Jan 28 04:12:01.738000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.767361 kernel: audit: type=1334 audit(1769573521.738:443): prog-id=139 op=LOAD Jan 28 04:12:01.767999 kernel: audit: type=1300 audit(1769573521.738:443): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.772651 kernel: audit: type=1327 audit(1769573521.738:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.738000 audit: BPF prog-id=140 op=LOAD Jan 28 04:12:01.738000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.738000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.738000 audit: BPF prog-id=140 op=UNLOAD Jan 28 04:12:01.738000 audit[3021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.738000 audit: BPF prog-id=139 op=UNLOAD Jan 28 04:12:01.738000 audit[3021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.738000 audit: BPF prog-id=141 op=LOAD Jan 28 04:12:01.738000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=3009 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366639356365626464646238303663663263303162303862333732 Jan 28 04:12:01.813979 containerd[1648]: time="2026-01-28T04:12:01.813890885Z" level=info msg="connecting to shim e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca" address="unix:///run/containerd/s/a40e24225a3416a737c49523b38ca33bf4c928731eeb83976dcff7452762da97" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:12:01.823363 containerd[1648]: time="2026-01-28T04:12:01.823030111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgbk9,Uid:6999e16d-4ac6-4a74-9b44-796f8cc46df5,Namespace:kube-system,Attempt:0,} returns sandbox id \"876f95cebdddb806cf2c01b08b3721c43c9b6bfd33ae178ffa66c933eebee7f6\"" Jan 28 04:12:01.837291 containerd[1648]: time="2026-01-28T04:12:01.836926390Z" level=info msg="CreateContainer within sandbox \"876f95cebdddb806cf2c01b08b3721c43c9b6bfd33ae178ffa66c933eebee7f6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 04:12:01.879804 systemd[1]: Started cri-containerd-e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca.scope - libcontainer container e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca. 
Jan 28 04:12:01.884466 containerd[1648]: time="2026-01-28T04:12:01.884145171Z" level=info msg="Container 9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:01.901000 audit: BPF prog-id=142 op=LOAD Jan 28 04:12:01.902000 audit: BPF prog-id=143 op=LOAD Jan 28 04:12:01.902000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.902000 audit: BPF prog-id=143 op=UNLOAD Jan 28 04:12:01.902000 audit[3066]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.903000 audit: BPF prog-id=144 op=LOAD Jan 28 04:12:01.903000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.903000 audit: BPF prog-id=145 op=LOAD Jan 28 04:12:01.903000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.903000 audit: BPF prog-id=145 op=UNLOAD Jan 28 04:12:01.903000 audit[3066]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.903000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.903000 audit: BPF prog-id=144 op=UNLOAD Jan 28 04:12:01.903000 audit[3066]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.903000 audit: BPF prog-id=146 op=LOAD Jan 28 04:12:01.903000 audit[3066]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3055 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:01.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533373836383761653838653065636138303935343335666562363365 Jan 28 04:12:01.914888 containerd[1648]: time="2026-01-28T04:12:01.914807160Z" level=info msg="CreateContainer within sandbox \"876f95cebdddb806cf2c01b08b3721c43c9b6bfd33ae178ffa66c933eebee7f6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9\"" Jan 28 04:12:01.917152 containerd[1648]: time="2026-01-28T04:12:01.916973822Z" level=info msg="StartContainer for \"9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9\"" Jan 28 04:12:01.920592 containerd[1648]: time="2026-01-28T04:12:01.920527602Z" level=info msg="connecting to shim 9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9" address="unix:///run/containerd/s/1701cc53a259f299948fda989538bfe6a5952451a1b3db1b80b1ddfe5effe467" protocol=ttrpc version=3 Jan 28 04:12:01.957910 systemd[1]: Started cri-containerd-9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9.scope - libcontainer container 9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9. 
Jan 28 04:12:01.996499 containerd[1648]: time="2026-01-28T04:12:01.996354558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zgmtb,Uid:f68ff9b8-f4b4-4d57-a54e-da7e5203a51b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca\"" Jan 28 04:12:02.001239 containerd[1648]: time="2026-01-28T04:12:02.000680683Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 04:12:02.063000 audit: BPF prog-id=147 op=LOAD Jan 28 04:12:02.063000 audit[3085]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3009 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961313264376537306639613231653738386562326466616638633833 Jan 28 04:12:02.063000 audit: BPF prog-id=148 op=LOAD Jan 28 04:12:02.063000 audit[3085]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3009 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961313264376537306639613231653738386562326466616638633833 Jan 28 04:12:02.063000 audit: BPF prog-id=148 op=UNLOAD Jan 28 04:12:02.063000 audit[3085]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961313264376537306639613231653738386562326466616638633833 Jan 28 04:12:02.063000 audit: BPF prog-id=147 op=UNLOAD Jan 28 04:12:02.063000 audit[3085]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3009 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961313264376537306639613231653738386562326466616638633833 Jan 28 04:12:02.063000 audit: BPF prog-id=149 op=LOAD Jan 28 04:12:02.063000 audit[3085]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3009 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961313264376537306639613231653738386562326466616638633833 Jan 28 04:12:02.096800 containerd[1648]: time="2026-01-28T04:12:02.095477765Z" level=info msg="StartContainer for \"9a12d7e70f9a21e788eb2dfaf8c83fc4bf5b13bc61404f0175506709114ca5c9\" returns successfully" Jan 28 04:12:02.595000 audit[3155]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.595000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd39584190 a2=0 a3=7ffd3958417c items=0 ppid=3100 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 04:12:02.599000 audit[3156]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.599000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebeb84b20 a2=0 a3=7ffebeb84b0c items=0 ppid=3100 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 04:12:02.601000 audit[3157]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.601000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd70c3cee0 a2=0 a3=7ffd70c3cecc items=0 ppid=3100 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 04:12:02.602000 audit[3159]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.602000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2fd11f80 a2=0 a3=7ffc2fd11f6c items=0 ppid=3100 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.603000 audit[3160]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 04:12:02.603000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb90b0db0 a2=0 
a3=7fffb90b0d9c items=0 ppid=3100 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 04:12:02.604000 audit[3161]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.604000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc76005670 a2=0 a3=7ffc7600565c items=0 ppid=3100 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.604000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 04:12:02.720503 kubelet[2950]: I0128 04:12:02.720010 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fgbk9" podStartSLOduration=1.719981882 podStartE2EDuration="1.719981882s" podCreationTimestamp="2026-01-28 04:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 04:12:02.719460068 +0000 UTC m=+8.331413852" watchObservedRunningTime="2026-01-28 04:12:02.719981882 +0000 UTC m=+8.331935674" Jan 28 04:12:02.722000 audit[3162]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.722000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd40259600 a2=0 a3=7ffd402595ec items=0 ppid=3100 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 04:12:02.742000 audit[3164]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.742000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc99480e80 a2=0 a3=7ffc99480e6c items=0 ppid=3100 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 04:12:02.749000 audit[3167]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.749000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcd7d04fb0 a2=0 a3=7ffcd7d04f9c items=0 ppid=3100 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 04:12:02.750000 audit[3168]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.750000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff77839000 a2=0 a3=7fff77838fec items=0 ppid=3100 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 04:12:02.755000 audit[3170]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.755000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2fd3aa60 a2=0 a3=7ffc2fd3aa4c items=0 ppid=3100 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 04:12:02.758000 audit[3171]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.758000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff63e5bbd0 a2=0 a3=7fff63e5bbbc items=0 ppid=3100 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 04:12:02.762000 audit[3173]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.762000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8c539820 a2=0 a3=7ffc8c53980c items=0 ppid=3100 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 04:12:02.768000 audit[3176]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 28 04:12:02.768000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff41076cd0 a2=0 a3=7fff41076cbc items=0 ppid=3100 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 04:12:02.770000 audit[3177]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.770000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbace2ed0 a2=0 a3=7fffbace2ebc items=0 ppid=3100 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 04:12:02.774000 audit[3179]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.774000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa7619660 a2=0 a3=7fffa761964c items=0 ppid=3100 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 04:12:02.776000 audit[3180]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.776000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb9c5a770 a2=0 a3=7fffb9c5a75c items=0 ppid=3100 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.776000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 04:12:02.781000 audit[3182]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.781000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf766d720 a2=0 a3=7ffcf766d70c items=0 ppid=3100 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.781000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 04:12:02.787000 audit[3185]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.787000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffabb618c0 a2=0 a3=7fffabb618ac items=0 ppid=3100 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 04:12:02.793000 audit[3188]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.793000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc7105cfd0 a2=0 a3=7ffc7105cfbc items=0 ppid=3100 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 04:12:02.795000 audit[3189]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.795000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6c920b70 a2=0 a3=7ffc6c920b5c items=0 ppid=3100 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 04:12:02.800000 audit[3191]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.800000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe09e04fa0 a2=0 a3=7ffe09e04f8c items=0 ppid=3100 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 04:12:02.808000 audit[3194]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.808000 audit[3194]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffa9b146a0 a2=0 a3=7fffa9b1468c items=0 ppid=3100 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 04:12:02.810000 audit[3195]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.810000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe03aedba0 a2=0 a3=7ffe03aedb8c items=0 ppid=3100 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 04:12:02.815000 audit[3197]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 04:12:02.815000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe5ac82320 a2=0 a3=7ffe5ac8230c items=0 ppid=3100 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 04:12:02.848000 audit[3203]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:02.848000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd01c42120 a2=0 a3=7ffd01c4210c items=0 ppid=3100 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:02.859000 audit[3203]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:02.859000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd01c42120 a2=0 a3=7ffd01c4210c items=0 ppid=3100 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:02.862000 audit[3208]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3208 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.862000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb0ff1330 a2=0 a3=7ffcb0ff131c items=0 ppid=3100 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 04:12:02.866000 audit[3210]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.866000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd273a8790 a2=0 a3=7ffd273a877c items=0 ppid=3100 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 04:12:02.873000 audit[3213]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.873000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcac7e6e10 a2=0 a3=7ffcac7e6dfc items=0 ppid=3100 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 04:12:02.876000 audit[3214]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.876000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2363d5d0 a2=0 a3=7fff2363d5bc items=0 ppid=3100 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 04:12:02.881000 audit[3216]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.881000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd94e68fb0 a2=0 a3=7ffd94e68f9c items=0 ppid=3100 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.881000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 04:12:02.883000 audit[3217]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.883000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc00e21d0 a2=0 a3=7ffcc00e21bc items=0 ppid=3100 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 04:12:02.887000 audit[3219]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.887000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffab588a00 a2=0 a3=7fffab5889ec items=0 ppid=3100 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 04:12:02.893000 audit[3222]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.893000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcfe365f60 a2=0 a3=7ffcfe365f4c items=0 ppid=3100 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 04:12:02.896000 audit[3223]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.896000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4a0f9d90 a2=0 a3=7fff4a0f9d7c items=0 ppid=3100 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 04:12:02.900000 audit[3225]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.900000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff57b96bb0 a2=0 
a3=7fff57b96b9c items=0 ppid=3100 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 04:12:02.902000 audit[3226]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.902000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2c642060 a2=0 a3=7ffc2c64204c items=0 ppid=3100 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 04:12:02.906000 audit[3228]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.906000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc49e2dff0 a2=0 a3=7ffc49e2dfdc items=0 ppid=3100 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.906000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 04:12:02.912000 audit[3231]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.912000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe23fffde0 a2=0 a3=7ffe23fffdcc items=0 ppid=3100 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 04:12:02.921000 audit[3234]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.921000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda6720c60 a2=0 a3=7ffda6720c4c items=0 ppid=3100 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.921000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 04:12:02.925000 audit[3235]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.925000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc91dabd30 a2=0 a3=7ffc91dabd1c items=0 ppid=3100 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 04:12:02.929000 audit[3237]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.929000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff15ede3f0 a2=0 a3=7fff15ede3dc items=0 ppid=3100 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 04:12:02.935000 audit[3240]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.935000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff38bab090 a2=0 a3=7fff38bab07c items=0 ppid=3100 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 04:12:02.937000 audit[3241]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.937000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4a1bc3b0 a2=0 a3=7ffc4a1bc39c items=0 ppid=3100 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 04:12:02.941000 audit[3243]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.941000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcb58421c0 a2=0 a3=7ffcb58421ac items=0 ppid=3100 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 04:12:02.943000 audit[3244]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.943000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe66fac810 a2=0 a3=7ffe66fac7fc items=0 ppid=3100 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 04:12:02.948000 audit[3246]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.948000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff2b0b3f70 a2=0 a3=7fff2b0b3f5c items=0 ppid=3100 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 04:12:02.953000 audit[3249]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 04:12:02.953000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc2bbb5140 a2=0 a3=7ffc2bbb512c items=0 ppid=3100 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 04:12:02.959000 audit[3251]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 04:12:02.959000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd74bdd840 a2=0 a3=7ffd74bdd82c items=0 ppid=3100 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.959000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:02.960000 audit[3251]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 04:12:02.960000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd74bdd840 a2=0 a3=7ffd74bdd82c items=0 ppid=3100 pid=3251 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:02.960000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:03.892062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300943039.mount: Deactivated successfully. Jan 28 04:12:05.406693 containerd[1648]: time="2026-01-28T04:12:05.406618789Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:05.407994 containerd[1648]: time="2026-01-28T04:12:05.407915279Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 28 04:12:05.431293 containerd[1648]: time="2026-01-28T04:12:05.431148716Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:05.453360 containerd[1648]: time="2026-01-28T04:12:05.453086353Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:05.454368 containerd[1648]: time="2026-01-28T04:12:05.454322416Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.452886796s" Jan 28 04:12:05.454535 containerd[1648]: time="2026-01-28T04:12:05.454501816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 04:12:05.460394 containerd[1648]: time="2026-01-28T04:12:05.460338415Z" level=info msg="CreateContainer within sandbox \"e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 04:12:05.483326 containerd[1648]: time="2026-01-28T04:12:05.480884730Z" level=info msg="Container d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:05.485453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122621346.mount: Deactivated successfully. 
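
The NETFILTER_CFG and runc audit records above carry the invoking command line in the PROCTITLE field as hex, with NUL bytes separating the argv elements. A minimal decoding sketch, assuming Python 3.9+ (the helper name is illustrative; the sample value is copied verbatim from the mangle-table entry logged at 04:12:02.595):

def decode_proctitle(hex_proctitle: str) -> list[str]:
    # audit hex-encodes the process title; argv elements are NUL-separated
    raw = bytes.fromhex(hex_proctitle)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

sample = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
print(decode_proctitle(sample))
# ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']

Decoded this way, the burst of entries between 04:12:02.595 and 04:12:02.960 is consistent with kube-proxy creating its KUBE-* chains and jump rules in the mangle, filter and nat tables for both IPv4 (iptables) and IPv6 (ip6tables).
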
Jan 28 04:12:05.509287 containerd[1648]: time="2026-01-28T04:12:05.509199196Z" level=info msg="CreateContainer within sandbox \"e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784\"" Jan 28 04:12:05.510354 containerd[1648]: time="2026-01-28T04:12:05.510135696Z" level=info msg="StartContainer for \"d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784\"" Jan 28 04:12:05.512048 containerd[1648]: time="2026-01-28T04:12:05.511997984Z" level=info msg="connecting to shim d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784" address="unix:///run/containerd/s/a40e24225a3416a737c49523b38ca33bf4c928731eeb83976dcff7452762da97" protocol=ttrpc version=3 Jan 28 04:12:05.545543 systemd[1]: Started cri-containerd-d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784.scope - libcontainer container d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784. Jan 28 04:12:05.567000 audit: BPF prog-id=150 op=LOAD Jan 28 04:12:05.568000 audit: BPF prog-id=151 op=LOAD Jan 28 04:12:05.568000 audit[3261]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.568000 audit: BPF prog-id=151 op=UNLOAD Jan 28 04:12:05.568000 audit[3261]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.568000 audit: BPF prog-id=152 op=LOAD Jan 28 04:12:05.568000 audit[3261]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.568000 audit: BPF prog-id=153 op=LOAD Jan 28 04:12:05.568000 audit[3261]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.568000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.568000 audit: BPF prog-id=153 op=UNLOAD Jan 28 04:12:05.568000 audit[3261]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.569000 audit: BPF prog-id=152 op=UNLOAD Jan 28 04:12:05.569000 audit[3261]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.569000 audit: BPF prog-id=154 op=LOAD Jan 28 04:12:05.569000 audit[3261]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3055 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:05.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430386264373034303664623461633263386430366238633430613431 Jan 28 04:12:05.600450 containerd[1648]: time="2026-01-28T04:12:05.600400603Z" level=info msg="StartContainer for \"d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784\" returns successfully" Jan 28 04:12:06.577900 kubelet[2950]: I0128 04:12:06.577818 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zgmtb" podStartSLOduration=2.120611289 podStartE2EDuration="5.577801445s" podCreationTimestamp="2026-01-28 04:12:01 +0000 UTC" firstStartedPulling="2026-01-28 04:12:01.998827705 +0000 UTC m=+7.610781477" lastFinishedPulling="2026-01-28 04:12:05.456017867 +0000 UTC m=+11.067971633" observedRunningTime="2026-01-28 04:12:05.725492842 +0000 UTC m=+11.337446634" watchObservedRunningTime="2026-01-28 04:12:06.577801445 +0000 UTC m=+12.189755232" Jan 28 04:12:09.201762 systemd[1]: cri-containerd-d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784.scope: Deactivated successfully. 
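
The pod_startup_latency_tracker entry above for tigera-operator is internally consistent: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration appears to exclude the image-pull window measured on the monotonic (m=+) clock. A quick check with the logged values (treating the pull-time exclusion as an assumption about kubelet's accounting; the arithmetic matches to the nanosecond):

# values copied from the kubelet entry at 04:12:06.577900
e2e_duration = 5.577801445                     # 04:12:06.577801445 - 04:12:01 (podCreationTimestamp)
pull_duration = 11.067971633 - 7.610781477     # lastFinishedPulling - firstStartedPulling, m=+ offsets
print(round(e2e_duration - pull_duration, 9))  # 2.120611289 == podStartSLOduration
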
Jan 28 04:12:09.216399 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 28 04:12:09.216614 kernel: audit: type=1334 audit(1769573529.206:520): prog-id=150 op=UNLOAD Jan 28 04:12:09.206000 audit: BPF prog-id=150 op=UNLOAD Jan 28 04:12:09.206000 audit: BPF prog-id=154 op=UNLOAD Jan 28 04:12:09.221287 kernel: audit: type=1334 audit(1769573529.206:521): prog-id=154 op=UNLOAD Jan 28 04:12:09.256582 containerd[1648]: time="2026-01-28T04:12:09.256507978Z" level=info msg="received container exit event container_id:\"d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784\" id:\"d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784\" pid:3274 exit_status:1 exited_at:{seconds:1769573529 nanos:219896413}" Jan 28 04:12:09.317421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784-rootfs.mount: Deactivated successfully. Jan 28 04:12:09.728110 kubelet[2950]: I0128 04:12:09.728049 2950 scope.go:117] "RemoveContainer" containerID="d08bd70406db4ac2c8d06b8c40a418c9fa5ee528ec00655792706101628cb784" Jan 28 04:12:09.734402 containerd[1648]: time="2026-01-28T04:12:09.733517313Z" level=info msg="CreateContainer within sandbox \"e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 28 04:12:09.750855 containerd[1648]: time="2026-01-28T04:12:09.747379498Z" level=info msg="Container 98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:09.755468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023919032.mount: Deactivated successfully. Jan 28 04:12:09.777531 containerd[1648]: time="2026-01-28T04:12:09.777472056Z" level=info msg="CreateContainer within sandbox \"e378687ae88e0eca8095435feb63efc169ed7c8a09d2de546aebfe89d5f025ca\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8\"" Jan 28 04:12:09.782304 containerd[1648]: time="2026-01-28T04:12:09.781300507Z" level=info msg="StartContainer for \"98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8\"" Jan 28 04:12:09.783128 containerd[1648]: time="2026-01-28T04:12:09.783098081Z" level=info msg="connecting to shim 98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8" address="unix:///run/containerd/s/a40e24225a3416a737c49523b38ca33bf4c928731eeb83976dcff7452762da97" protocol=ttrpc version=3 Jan 28 04:12:09.836037 systemd[1]: Started cri-containerd-98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8.scope - libcontainer container 98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8. 
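
The container exit event above reports exited_at as a Unix timestamp; assuming the seconds field is epoch time (the audit records on the surrounding lines use the same second, 1769573529), converting it places the exit squarely among the 04:12:09 entries around it. A small check:

from datetime import datetime, timezone

# exited_at from the containerd exit event for container d08bd704...
print(datetime.fromtimestamp(1769573529.219896413, tz=timezone.utc))
# 2026-01-28 04:12:09.219896+00:00
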
Jan 28 04:12:09.924000 audit: BPF prog-id=155 op=LOAD Jan 28 04:12:09.931480 kernel: audit: type=1334 audit(1769573529.924:522): prog-id=155 op=LOAD Jan 28 04:12:09.935278 kernel: audit: type=1334 audit(1769573529.930:523): prog-id=156 op=LOAD Jan 28 04:12:09.930000 audit: BPF prog-id=156 op=LOAD Jan 28 04:12:09.930000 audit[3324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.945916 kernel: audit: type=1300 audit(1769573529.930:523): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.946051 kernel: audit: type=1327 audit(1769573529.930:523): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.947605 kernel: audit: type=1334 audit(1769573529.931:524): prog-id=156 op=UNLOAD Jan 28 04:12:09.931000 audit: BPF prog-id=156 op=UNLOAD Jan 28 04:12:09.952692 kernel: audit: type=1300 audit(1769573529.931:524): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.931000 audit[3324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.957952 kernel: audit: type=1327 audit(1769573529.931:524): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.931000 audit: BPF prog-id=157 op=LOAD Jan 28 04:12:09.931000 audit[3324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.931000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.931000 audit: BPF prog-id=158 op=LOAD Jan 28 04:12:09.931000 audit[3324]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.960289 kernel: audit: type=1334 audit(1769573529.931:525): prog-id=157 op=LOAD Jan 28 04:12:09.931000 audit: BPF prog-id=158 op=UNLOAD Jan 28 04:12:09.931000 audit[3324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.931000 audit: BPF prog-id=157 op=UNLOAD Jan 28 04:12:09.931000 audit[3324]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:09.932000 audit: BPF prog-id=159 op=LOAD Jan 28 04:12:09.932000 audit[3324]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3055 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:09.932000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938623331626136323435343635626434333265653663623834396336 Jan 28 04:12:10.018786 containerd[1648]: time="2026-01-28T04:12:10.018493220Z" level=info msg="StartContainer for \"98b31ba6245465bd432ee6cb849c6b782d55d7b67e8f341310fda7ee555f8aa8\" returns successfully" Jan 28 04:12:13.017000 audit[1948]: USER_END pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 04:12:13.018731 sudo[1948]: pam_unix(sudo:session): session closed for user root Jan 28 04:12:13.018000 audit[1948]: CRED_DISP pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 04:12:13.131546 sshd[1947]: Connection closed by 4.153.228.146 port 48200 Jan 28 04:12:13.135233 sshd-session[1943]: pam_unix(sshd:session): session closed for user core Jan 28 04:12:13.141000 audit[1943]: USER_END pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:12:13.142000 audit[1943]: CRED_DISP pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:12:13.149112 systemd[1]: sshd@8-10.230.66.102:22-4.153.228.146:48200.service: Deactivated successfully. Jan 28 04:12:13.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.66.102:22-4.153.228.146:48200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:12:13.158475 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 04:12:13.160411 systemd[1]: session-12.scope: Consumed 6.353s CPU time, 152.8M memory peak. Jan 28 04:12:13.169618 systemd-logind[1617]: Session 12 logged out. Waiting for processes to exit. Jan 28 04:12:13.172503 systemd-logind[1617]: Removed session 12. 
Jan 28 04:12:14.890938 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 28 04:12:14.891485 kernel: audit: type=1325 audit(1769573534.882:535): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:14.882000 audit[3387]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:14.882000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc98b283d0 a2=0 a3=7ffc98b283bc items=0 ppid=3100 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:14.903329 kernel: audit: type=1300 audit(1769573534.882:535): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc98b283d0 a2=0 a3=7ffc98b283bc items=0 ppid=3100 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:14.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:14.907278 kernel: audit: type=1327 audit(1769573534.882:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:14.895000 audit[3387]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:14.911276 kernel: audit: type=1325 audit(1769573534.895:536): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:14.895000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc98b283d0 a2=0 a3=0 items=0 ppid=3100 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:14.923279 kernel: audit: type=1300 audit(1769573534.895:536): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc98b283d0 a2=0 a3=0 items=0 ppid=3100 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:14.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:14.930291 kernel: audit: type=1327 audit(1769573534.895:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:15.156000 audit[3389]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:15.156000 audit[3389]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff36cae5a0 a2=0 a3=7fff36cae58c items=0 ppid=3100 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:15.164201 
kernel: audit: type=1325 audit(1769573535.156:537): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:15.164322 kernel: audit: type=1300 audit(1769573535.156:537): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff36cae5a0 a2=0 a3=7fff36cae58c items=0 ppid=3100 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:15.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:15.172323 kernel: audit: type=1327 audit(1769573535.156:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:15.171000 audit[3389]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:15.176286 kernel: audit: type=1325 audit(1769573535.171:538): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:15.171000 audit[3389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff36cae5a0 a2=0 a3=0 items=0 ppid=3100 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:15.171000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:18.025000 audit[3392]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:18.025000 audit[3392]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd1e616e80 a2=0 a3=7ffd1e616e6c items=0 ppid=3100 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:18.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:18.032000 audit[3392]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:18.032000 audit[3392]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1e616e80 a2=0 a3=0 items=0 ppid=3100 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:18.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:18.090000 audit[3394]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:18.090000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe36a8780 a2=0 a3=7fffe36a876c items=0 ppid=3100 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:18.090000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:18.095000 audit[3394]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:18.095000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe36a8780 a2=0 a3=0 items=0 ppid=3100 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:18.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:19.125000 audit[3396]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:19.125000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff4c5bcd50 a2=0 a3=7fff4c5bcd3c items=0 ppid=3100 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:19.125000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:19.129000 audit[3396]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:19.129000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4c5bcd50 a2=0 a3=0 items=0 ppid=3100 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:19.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:20.295000 audit[3398]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:20.303613 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 28 04:12:20.303784 kernel: audit: type=1325 audit(1769573540.295:545): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:20.295000 audit[3398]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed8f16960 a2=0 a3=7ffed8f1694c items=0 ppid=3100 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:20.315349 kernel: audit: type=1300 audit(1769573540.295:545): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed8f16960 a2=0 a3=7ffed8f1694c items=0 ppid=3100 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
04:12:20.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:20.322306 kernel: audit: type=1327 audit(1769573540.295:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:20.317000 audit[3398]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:20.327300 kernel: audit: type=1325 audit(1769573540.317:546): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:20.317000 audit[3398]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed8f16960 a2=0 a3=0 items=0 ppid=3100 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:20.334312 kernel: audit: type=1300 audit(1769573540.317:546): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed8f16960 a2=0 a3=0 items=0 ppid=3100 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:20.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:20.339283 kernel: audit: type=1327 audit(1769573540.317:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:20.372905 kubelet[2950]: I0128 04:12:20.371544 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/217960be-def3-46b9-9507-f83382dbbbd2-typha-certs\") pod \"calico-typha-55cb6ddb5f-9lwx4\" (UID: \"217960be-def3-46b9-9507-f83382dbbbd2\") " pod="calico-system/calico-typha-55cb6ddb5f-9lwx4" Jan 28 04:12:20.372905 kubelet[2950]: I0128 04:12:20.371617 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhdh\" (UniqueName: \"kubernetes.io/projected/217960be-def3-46b9-9507-f83382dbbbd2-kube-api-access-zzhdh\") pod \"calico-typha-55cb6ddb5f-9lwx4\" (UID: \"217960be-def3-46b9-9507-f83382dbbbd2\") " pod="calico-system/calico-typha-55cb6ddb5f-9lwx4" Jan 28 04:12:20.372905 kubelet[2950]: I0128 04:12:20.371674 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217960be-def3-46b9-9507-f83382dbbbd2-tigera-ca-bundle\") pod \"calico-typha-55cb6ddb5f-9lwx4\" (UID: \"217960be-def3-46b9-9507-f83382dbbbd2\") " pod="calico-system/calico-typha-55cb6ddb5f-9lwx4" Jan 28 04:12:20.392069 systemd[1]: Created slice kubepods-besteffort-pod217960be_def3_46b9_9507_f83382dbbbd2.slice - libcontainer container kubepods-besteffort-pod217960be_def3_46b9_9507_f83382dbbbd2.slice. Jan 28 04:12:20.582553 systemd[1]: Created slice kubepods-besteffort-pod7a8659ed_56ac_4a6b_b267_8b7f5cbaa12d.slice - libcontainer container kubepods-besteffort-pod7a8659ed_56ac_4a6b_b267_8b7f5cbaa12d.slice. 
Jan 28 04:12:20.673776 kubelet[2950]: I0128 04:12:20.673674 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-cni-log-dir\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.673776 kubelet[2950]: I0128 04:12:20.673773 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-flexvol-driver-host\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674084 kubelet[2950]: I0128 04:12:20.673823 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-cni-net-dir\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674084 kubelet[2950]: I0128 04:12:20.673864 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-tigera-ca-bundle\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674084 kubelet[2950]: I0128 04:12:20.673897 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-var-lib-calico\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674084 kubelet[2950]: I0128 04:12:20.673927 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-xtables-lock\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674084 kubelet[2950]: I0128 04:12:20.673958 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-var-run-calico\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674648 kubelet[2950]: I0128 04:12:20.674008 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-node-certs\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674648 kubelet[2950]: I0128 04:12:20.674036 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nlxf\" (UniqueName: \"kubernetes.io/projected/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-kube-api-access-6nlxf\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674648 kubelet[2950]: I0128 04:12:20.674069 2950 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-lib-modules\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674648 kubelet[2950]: I0128 04:12:20.674106 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-policysync\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.674648 kubelet[2950]: I0128 04:12:20.674150 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d-cni-bin-dir\") pod \"calico-node-zh7sc\" (UID: \"7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d\") " pod="calico-system/calico-node-zh7sc" Jan 28 04:12:20.719845 containerd[1648]: time="2026-01-28T04:12:20.719753149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cb6ddb5f-9lwx4,Uid:217960be-def3-46b9-9507-f83382dbbbd2,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:20.772042 kubelet[2950]: E0128 04:12:20.771780 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:20.791725 kubelet[2950]: E0128 04:12:20.791614 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.792112 kubelet[2950]: W0128 04:12:20.792033 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.794371 kubelet[2950]: E0128 04:12:20.794311 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.827435 kubelet[2950]: E0128 04:12:20.826153 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.827435 kubelet[2950]: W0128 04:12:20.827382 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.827435 kubelet[2950]: E0128 04:12:20.827650 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.864186 kubelet[2950]: E0128 04:12:20.863352 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.864186 kubelet[2950]: W0128 04:12:20.863405 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.864186 kubelet[2950]: E0128 04:12:20.863436 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.866165 kubelet[2950]: E0128 04:12:20.866096 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.866165 kubelet[2950]: W0128 04:12:20.866137 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.866498 kubelet[2950]: E0128 04:12:20.866352 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.866802 kubelet[2950]: E0128 04:12:20.866781 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.866988 kubelet[2950]: W0128 04:12:20.866896 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.866988 kubelet[2950]: E0128 04:12:20.866921 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.867880 kubelet[2950]: E0128 04:12:20.867495 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.867880 kubelet[2950]: W0128 04:12:20.867516 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.867880 kubelet[2950]: E0128 04:12:20.867533 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.868393 kubelet[2950]: E0128 04:12:20.868210 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.869095 kubelet[2950]: W0128 04:12:20.868488 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.869095 kubelet[2950]: E0128 04:12:20.868513 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.869579 kubelet[2950]: E0128 04:12:20.869400 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.869579 kubelet[2950]: W0128 04:12:20.869420 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.869579 kubelet[2950]: E0128 04:12:20.869437 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.870585 kubelet[2950]: E0128 04:12:20.870231 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.870865 kubelet[2950]: W0128 04:12:20.870690 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.870865 kubelet[2950]: E0128 04:12:20.870717 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.871884 kubelet[2950]: E0128 04:12:20.871862 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.872003 kubelet[2950]: W0128 04:12:20.871981 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.872104 kubelet[2950]: E0128 04:12:20.872084 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.872934 kubelet[2950]: E0128 04:12:20.872784 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.872934 kubelet[2950]: W0128 04:12:20.872803 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.872934 kubelet[2950]: E0128 04:12:20.872819 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.873901 kubelet[2950]: E0128 04:12:20.873880 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.874016 kubelet[2950]: W0128 04:12:20.873995 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.874126 kubelet[2950]: E0128 04:12:20.874095 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.875306 kubelet[2950]: E0128 04:12:20.874641 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.875306 kubelet[2950]: W0128 04:12:20.874659 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.875306 kubelet[2950]: E0128 04:12:20.874675 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.875796 kubelet[2950]: E0128 04:12:20.875638 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.875796 kubelet[2950]: W0128 04:12:20.875658 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.875796 kubelet[2950]: E0128 04:12:20.875673 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.876813 kubelet[2950]: E0128 04:12:20.876309 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.876813 kubelet[2950]: W0128 04:12:20.876332 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.876813 kubelet[2950]: E0128 04:12:20.876350 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.877150 kubelet[2950]: E0128 04:12:20.877127 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.877481 kubelet[2950]: W0128 04:12:20.877234 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.877481 kubelet[2950]: E0128 04:12:20.877291 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.878031 kubelet[2950]: E0128 04:12:20.877885 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.878433 kubelet[2950]: W0128 04:12:20.878145 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.878433 kubelet[2950]: E0128 04:12:20.878173 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.878941 kubelet[2950]: E0128 04:12:20.878830 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.879159 kubelet[2950]: W0128 04:12:20.879134 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.879336 kubelet[2950]: E0128 04:12:20.879301 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.879974 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.886749 kubelet[2950]: W0128 04:12:20.879988 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.880005 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.880508 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.886749 kubelet[2950]: W0128 04:12:20.880534 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.880554 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.880991 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.886749 kubelet[2950]: W0128 04:12:20.881005 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.881247 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.886749 kubelet[2950]: E0128 04:12:20.881922 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.887724 kubelet[2950]: W0128 04:12:20.881937 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.887724 kubelet[2950]: E0128 04:12:20.881952 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.887724 kubelet[2950]: E0128 04:12:20.882976 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.887724 kubelet[2950]: W0128 04:12:20.883009 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.887724 kubelet[2950]: E0128 04:12:20.883037 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.887724 kubelet[2950]: I0128 04:12:20.883085 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqd7k\" (UniqueName: \"kubernetes.io/projected/c2a88baa-8755-4a0f-b81e-f2ef466fcd2d-kube-api-access-xqd7k\") pod \"csi-node-driver-bnlhb\" (UID: \"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d\") " pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:20.887724 kubelet[2950]: E0128 04:12:20.884579 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.887724 kubelet[2950]: W0128 04:12:20.884597 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.887724 kubelet[2950]: E0128 04:12:20.884625 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.888086 kubelet[2950]: I0128 04:12:20.884666 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2a88baa-8755-4a0f-b81e-f2ef466fcd2d-registration-dir\") pod \"csi-node-driver-bnlhb\" (UID: \"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d\") " pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:20.888086 kubelet[2950]: E0128 04:12:20.885085 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.888086 kubelet[2950]: W0128 04:12:20.885102 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.888086 kubelet[2950]: E0128 04:12:20.885214 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.888086 kubelet[2950]: I0128 04:12:20.886036 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c2a88baa-8755-4a0f-b81e-f2ef466fcd2d-varrun\") pod \"csi-node-driver-bnlhb\" (UID: \"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d\") " pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:20.888086 kubelet[2950]: E0128 04:12:20.886337 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.888086 kubelet[2950]: W0128 04:12:20.886358 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.888086 kubelet[2950]: E0128 04:12:20.886481 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.888086 kubelet[2950]: E0128 04:12:20.886914 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.888937 kubelet[2950]: W0128 04:12:20.886928 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.888937 kubelet[2950]: E0128 04:12:20.887367 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.888937 kubelet[2950]: E0128 04:12:20.888220 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.888937 kubelet[2950]: W0128 04:12:20.888235 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.888937 kubelet[2950]: E0128 04:12:20.888471 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.888937 kubelet[2950]: I0128 04:12:20.888513 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2a88baa-8755-4a0f-b81e-f2ef466fcd2d-kubelet-dir\") pod \"csi-node-driver-bnlhb\" (UID: \"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d\") " pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:20.889845 kubelet[2950]: E0128 04:12:20.889311 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.889845 kubelet[2950]: W0128 04:12:20.889326 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.889845 kubelet[2950]: E0128 04:12:20.889370 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.889845 kubelet[2950]: E0128 04:12:20.889600 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.889845 kubelet[2950]: W0128 04:12:20.889614 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.889845 kubelet[2950]: E0128 04:12:20.889628 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.890414 kubelet[2950]: E0128 04:12:20.890289 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.890414 kubelet[2950]: W0128 04:12:20.890308 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.890414 kubelet[2950]: E0128 04:12:20.890324 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.890911 kubelet[2950]: E0128 04:12:20.890845 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.890911 kubelet[2950]: W0128 04:12:20.890863 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.890911 kubelet[2950]: E0128 04:12:20.890887 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.891980 kubelet[2950]: E0128 04:12:20.891944 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.892504 kubelet[2950]: W0128 04:12:20.892083 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.892580 containerd[1648]: time="2026-01-28T04:12:20.892420544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zh7sc,Uid:7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:20.892655 kubelet[2950]: E0128 04:12:20.892581 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.892655 kubelet[2950]: W0128 04:12:20.892599 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.892655 kubelet[2950]: E0128 04:12:20.892618 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.893091 kubelet[2950]: E0128 04:12:20.892127 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.893091 kubelet[2950]: I0128 04:12:20.892874 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2a88baa-8755-4a0f-b81e-f2ef466fcd2d-socket-dir\") pod \"csi-node-driver-bnlhb\" (UID: \"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d\") " pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:20.893091 kubelet[2950]: E0128 04:12:20.892882 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.893091 kubelet[2950]: W0128 04:12:20.892898 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.893091 kubelet[2950]: E0128 04:12:20.892914 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.893619 kubelet[2950]: E0128 04:12:20.893338 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.893619 kubelet[2950]: W0128 04:12:20.893354 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.893619 kubelet[2950]: E0128 04:12:20.893370 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.893775 kubelet[2950]: E0128 04:12:20.893678 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.893775 kubelet[2950]: W0128 04:12:20.893693 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.893775 kubelet[2950]: E0128 04:12:20.893708 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.995112 kubelet[2950]: E0128 04:12:20.995024 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.995112 kubelet[2950]: W0128 04:12:20.995102 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.995112 kubelet[2950]: E0128 04:12:20.995181 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.995673 kubelet[2950]: E0128 04:12:20.995625 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.995781 kubelet[2950]: W0128 04:12:20.995750 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.995849 kubelet[2950]: E0128 04:12:20.995792 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.996238 kubelet[2950]: E0128 04:12:20.996214 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.996238 kubelet[2950]: W0128 04:12:20.996237 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.996366 kubelet[2950]: E0128 04:12:20.996305 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.996691 kubelet[2950]: E0128 04:12:20.996669 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.996691 kubelet[2950]: W0128 04:12:20.996689 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.996915 kubelet[2950]: E0128 04:12:20.996772 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.997012 kubelet[2950]: E0128 04:12:20.996977 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.997085 kubelet[2950]: W0128 04:12:20.997018 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.997242 kubelet[2950]: E0128 04:12:20.997179 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.997458 kubelet[2950]: E0128 04:12:20.997437 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.997458 kubelet[2950]: W0128 04:12:20.997458 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.997659 kubelet[2950]: E0128 04:12:20.997559 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.997812 kubelet[2950]: E0128 04:12:20.997792 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.997873 kubelet[2950]: W0128 04:12:20.997833 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.997873 kubelet[2950]: E0128 04:12:20.997860 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.998323 kubelet[2950]: E0128 04:12:20.998299 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.998323 kubelet[2950]: W0128 04:12:20.998320 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.998501 kubelet[2950]: E0128 04:12:20.998428 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.998696 kubelet[2950]: E0128 04:12:20.998653 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.998696 kubelet[2950]: W0128 04:12:20.998672 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.998922 kubelet[2950]: E0128 04:12:20.998790 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.999145 kubelet[2950]: E0128 04:12:20.998983 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.999145 kubelet[2950]: W0128 04:12:20.998997 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.999145 kubelet[2950]: E0128 04:12:20.999086 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:20.999404 kubelet[2950]: E0128 04:12:20.999337 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.999404 kubelet[2950]: W0128 04:12:20.999351 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.999707 kubelet[2950]: E0128 04:12:20.999636 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:20.999788 kubelet[2950]: E0128 04:12:20.999761 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:20.999788 kubelet[2950]: W0128 04:12:20.999775 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:20.999942 kubelet[2950]: E0128 04:12:20.999813 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.000491 kubelet[2950]: E0128 04:12:21.000468 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.000491 kubelet[2950]: W0128 04:12:21.000487 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.000783 kubelet[2950]: E0128 04:12:21.000579 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.000783 kubelet[2950]: E0128 04:12:21.000725 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.000783 kubelet[2950]: W0128 04:12:21.000738 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.001150 kubelet[2950]: E0128 04:12:21.000830 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.001150 kubelet[2950]: E0128 04:12:21.000964 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.001150 kubelet[2950]: W0128 04:12:21.000978 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.001150 kubelet[2950]: E0128 04:12:21.001010 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.001472 kubelet[2950]: E0128 04:12:21.001239 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.001472 kubelet[2950]: W0128 04:12:21.001253 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.001472 kubelet[2950]: E0128 04:12:21.001319 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:21.001607 kubelet[2950]: E0128 04:12:21.001594 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.001725 kubelet[2950]: W0128 04:12:21.001608 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.001725 kubelet[2950]: E0128 04:12:21.001649 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.001996 kubelet[2950]: E0128 04:12:21.001913 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.001996 kubelet[2950]: W0128 04:12:21.001927 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.001996 kubelet[2950]: E0128 04:12:21.001949 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.002268 kubelet[2950]: E0128 04:12:21.002230 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.002268 kubelet[2950]: W0128 04:12:21.002251 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.002522 kubelet[2950]: E0128 04:12:21.002304 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.002670 kubelet[2950]: E0128 04:12:21.002551 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.002670 kubelet[2950]: W0128 04:12:21.002565 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.002670 kubelet[2950]: E0128 04:12:21.002600 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.002972 kubelet[2950]: E0128 04:12:21.002777 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.002972 kubelet[2950]: W0128 04:12:21.002790 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.002972 kubelet[2950]: E0128 04:12:21.002838 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:21.003105 kubelet[2950]: E0128 04:12:21.003034 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.003105 kubelet[2950]: W0128 04:12:21.003047 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.003398 kubelet[2950]: E0128 04:12:21.003331 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.003398 kubelet[2950]: W0128 04:12:21.003344 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.003607 kubelet[2950]: E0128 04:12:21.003582 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.003607 kubelet[2950]: W0128 04:12:21.003601 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.003702 kubelet[2950]: E0128 04:12:21.003618 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.004218 kubelet[2950]: E0128 04:12:21.003760 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.004218 kubelet[2950]: E0128 04:12:21.003965 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.005060 kubelet[2950]: E0128 04:12:21.005037 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.005060 kubelet[2950]: W0128 04:12:21.005057 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.005229 kubelet[2950]: E0128 04:12:21.005073 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 04:12:21.049875 kubelet[2950]: E0128 04:12:21.049824 2950 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 04:12:21.051187 kubelet[2950]: W0128 04:12:21.050136 2950 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 04:12:21.051187 kubelet[2950]: E0128 04:12:21.050181 2950 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 04:12:21.102392 containerd[1648]: time="2026-01-28T04:12:21.101604129Z" level=info msg="connecting to shim 65e5a03789b5cf48e0761b5071052028d26e222424d379893b65c61d1c5418ac" address="unix:///run/containerd/s/281ef21e5e5c5857a39074456a49e7e98922f88819c220037d3eba140a113983" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:12:21.138767 containerd[1648]: time="2026-01-28T04:12:21.138552952Z" level=info msg="connecting to shim 61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88" address="unix:///run/containerd/s/9be2319cd126e54d4f09d6b931ceae104975c48ba3bb205fc59bc15b45e7c81a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:12:21.172722 systemd[1]: Started cri-containerd-65e5a03789b5cf48e0761b5071052028d26e222424d379893b65c61d1c5418ac.scope - libcontainer container 65e5a03789b5cf48e0761b5071052028d26e222424d379893b65c61d1c5418ac. Jan 28 04:12:21.208954 systemd[1]: Started cri-containerd-61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88.scope - libcontainer container 61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88. Jan 28 04:12:21.255000 audit: BPF prog-id=160 op=LOAD Jan 28 04:12:21.266369 kernel: audit: type=1334 audit(1769573541.255:547): prog-id=160 op=LOAD Jan 28 04:12:21.270000 audit: BPF prog-id=161 op=LOAD Jan 28 04:12:21.270000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.274458 kernel: audit: type=1334 audit(1769573541.270:548): prog-id=161 op=LOAD Jan 28 04:12:21.274602 kernel: audit: type=1300 audit(1769573541.270:548): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.284428 kernel: audit: type=1327 audit(1769573541.270:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.277000 audit: BPF prog-id=161 op=UNLOAD Jan 28 04:12:21.277000 audit[3494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.277000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.283000 audit: BPF prog-id=162 op=LOAD Jan 28 04:12:21.283000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.286000 audit: BPF prog-id=163 op=LOAD Jan 28 04:12:21.286000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.288000 audit: BPF prog-id=163 op=UNLOAD Jan 28 04:12:21.288000 audit[3494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.289000 audit: BPF prog-id=162 op=UNLOAD Jan 28 04:12:21.289000 audit[3494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.289000 audit: BPF prog-id=164 op=LOAD Jan 28 04:12:21.289000 audit[3494]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3482 pid=3494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635653561303337383962356366343865303736316235303731303532 Jan 28 04:12:21.292000 audit: BPF prog-id=165 op=LOAD Jan 28 04:12:21.294000 audit: BPF prog-id=166 op=LOAD Jan 28 04:12:21.294000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.296000 audit: BPF prog-id=166 op=UNLOAD Jan 28 04:12:21.296000 audit[3525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.296000 audit: BPF prog-id=167 op=LOAD Jan 28 04:12:21.296000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.298000 audit: BPF prog-id=168 op=LOAD Jan 28 04:12:21.298000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.298000 audit: BPF prog-id=168 op=UNLOAD Jan 28 04:12:21.298000 audit[3525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.298000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.298000 audit: BPF prog-id=167 op=UNLOAD Jan 28 04:12:21.298000 audit[3525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.298000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.300000 audit: BPF prog-id=169 op=LOAD Jan 28 04:12:21.300000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3509 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631653161336436633461356131363737383063646464333365613436 Jan 28 04:12:21.385519 containerd[1648]: time="2026-01-28T04:12:21.385394380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zh7sc,Uid:7a8659ed-56ac-4a6b-b267-8b7f5cbaa12d,Namespace:calico-system,Attempt:0,} returns sandbox id \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\"" Jan 28 04:12:21.386000 audit[3559]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:21.386000 audit[3559]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcffe4a180 a2=0 a3=7ffcffe4a16c items=0 ppid=3100 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:21.392805 containerd[1648]: time="2026-01-28T04:12:21.391423253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 04:12:21.390000 audit[3559]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:21.390000 audit[3559]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcffe4a180 a2=0 a3=0 items=0 ppid=3100 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:21.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:21.448336 containerd[1648]: time="2026-01-28T04:12:21.448240760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55cb6ddb5f-9lwx4,Uid:217960be-def3-46b9-9507-f83382dbbbd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"65e5a03789b5cf48e0761b5071052028d26e222424d379893b65c61d1c5418ac\"" Jan 28 04:12:22.612937 kubelet[2950]: E0128 04:12:22.612828 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 
04:12:23.157372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714454818.mount: Deactivated successfully. Jan 28 04:12:23.419011 containerd[1648]: time="2026-01-28T04:12:23.418820968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:23.420832 containerd[1648]: time="2026-01-28T04:12:23.420792223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5937317" Jan 28 04:12:23.422424 containerd[1648]: time="2026-01-28T04:12:23.422368013Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:23.455704 containerd[1648]: time="2026-01-28T04:12:23.455611429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:23.457923 containerd[1648]: time="2026-01-28T04:12:23.457762147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.064871055s" Jan 28 04:12:23.457923 containerd[1648]: time="2026-01-28T04:12:23.457806288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 04:12:23.460782 containerd[1648]: time="2026-01-28T04:12:23.460722637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 04:12:23.465630 containerd[1648]: time="2026-01-28T04:12:23.465490593Z" level=info msg="CreateContainer within sandbox \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 04:12:23.513304 containerd[1648]: time="2026-01-28T04:12:23.512591172Z" level=info msg="Container 19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:23.520573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787396994.mount: Deactivated successfully. 
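[Annotation] The repeated kubelet errors earlier in this window come from FlexVolume plugin probing: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the kubelet execs the driver binary with the argument "init" and parses its stdout as a JSON status object. The nodeagent~uds/uds binary is only installed later by the flexvol-driver init container whose image is being pulled here, so at this point the exec fails, stdout is empty, and the JSON decoder reports "unexpected end of JSON input". Below is a minimal sketch, in Go, of a driver that would satisfy the init probe; the status fields follow the documented FlexVolume response format, while the capabilities chosen and everything else are illustrative assumptions, not the actual nodeagent~uds implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON status object the kubelet's driver-call.go expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Printing nothing here is exactly what produces the
		// "unexpected end of JSON input" errors seen in the log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // illustrative assumption
		})
		fmt.Println(string(out))
		return
	}
	// Any other call is reported as unsupported in this sketch.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}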
Jan 28 04:12:23.538741 containerd[1648]: time="2026-01-28T04:12:23.538655417Z" level=info msg="CreateContainer within sandbox \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e\"" Jan 28 04:12:23.540820 containerd[1648]: time="2026-01-28T04:12:23.540678928Z" level=info msg="StartContainer for \"19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e\"" Jan 28 04:12:23.545974 containerd[1648]: time="2026-01-28T04:12:23.545898729Z" level=info msg="connecting to shim 19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e" address="unix:///run/containerd/s/9be2319cd126e54d4f09d6b931ceae104975c48ba3bb205fc59bc15b45e7c81a" protocol=ttrpc version=3 Jan 28 04:12:23.598560 systemd[1]: Started cri-containerd-19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e.scope - libcontainer container 19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e. Jan 28 04:12:23.728000 audit: BPF prog-id=170 op=LOAD Jan 28 04:12:23.728000 audit[3574]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3509 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:23.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139656232316332326337656462313833373263383166363764656562 Jan 28 04:12:23.728000 audit: BPF prog-id=171 op=LOAD Jan 28 04:12:23.728000 audit[3574]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3509 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:23.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139656232316332326337656462313833373263383166363764656562 Jan 28 04:12:23.728000 audit: BPF prog-id=171 op=UNLOAD Jan 28 04:12:23.728000 audit[3574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:23.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139656232316332326337656462313833373263383166363764656562 Jan 28 04:12:23.728000 audit: BPF prog-id=170 op=UNLOAD Jan 28 04:12:23.728000 audit[3574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:23.728000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139656232316332326337656462313833373263383166363764656562 Jan 28 04:12:23.728000 audit: BPF prog-id=172 op=LOAD Jan 28 04:12:23.728000 audit[3574]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3509 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:23.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139656232316332326337656462313833373263383166363764656562 Jan 28 04:12:23.787280 containerd[1648]: time="2026-01-28T04:12:23.786992008Z" level=info msg="StartContainer for \"19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e\" returns successfully" Jan 28 04:12:23.812114 systemd[1]: cri-containerd-19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e.scope: Deactivated successfully. Jan 28 04:12:23.816000 audit: BPF prog-id=172 op=UNLOAD Jan 28 04:12:23.824461 containerd[1648]: time="2026-01-28T04:12:23.824398390Z" level=info msg="received container exit event container_id:\"19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e\" id:\"19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e\" pid:3586 exited_at:{seconds:1769573543 nanos:823726240}" Jan 28 04:12:24.037183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19eb21c22c7edb18372c81f67deeb8178000a88bab9cfc2d2c555c1ddedb483e-rootfs.mount: Deactivated successfully. 
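[Annotation] The exited_at field in the container exit event above is a protobuf timestamp (seconds plus nanoseconds since the Unix epoch) and lines up with the journal time of the surrounding entries. A purely illustrative conversion of the values from this event:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the event above: seconds:1769573543 nanos:823726240
	t := time.Unix(1769573543, 823726240).UTC()
	fmt.Println(t) // 2026-01-28 04:12:23.82372624 +0000 UTC, matching the journal line
}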
Jan 28 04:12:24.614087 kubelet[2950]: E0128 04:12:24.612892 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:26.612653 kubelet[2950]: E0128 04:12:26.612598 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:28.005292 containerd[1648]: time="2026-01-28T04:12:28.004986072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:28.007527 containerd[1648]: time="2026-01-28T04:12:28.007481123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 28 04:12:28.008570 containerd[1648]: time="2026-01-28T04:12:28.008354176Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:28.041571 containerd[1648]: time="2026-01-28T04:12:28.041489119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:28.042637 containerd[1648]: time="2026-01-28T04:12:28.042565072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.581475253s" Jan 28 04:12:28.042637 containerd[1648]: time="2026-01-28T04:12:28.042607869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 04:12:28.045469 containerd[1648]: time="2026-01-28T04:12:28.045344415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 04:12:28.085072 containerd[1648]: time="2026-01-28T04:12:28.084141301Z" level=info msg="CreateContainer within sandbox \"65e5a03789b5cf48e0761b5071052028d26e222424d379893b65c61d1c5418ac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 04:12:28.114661 containerd[1648]: time="2026-01-28T04:12:28.114612877Z" level=info msg="Container bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:28.123875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028009221.mount: Deactivated successfully. 
Jan 28 04:12:28.132635 containerd[1648]: time="2026-01-28T04:12:28.132535955Z" level=info msg="CreateContainer within sandbox \"65e5a03789b5cf48e0761b5071052028d26e222424d379893b65c61d1c5418ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467\"" Jan 28 04:12:28.134073 containerd[1648]: time="2026-01-28T04:12:28.133839441Z" level=info msg="StartContainer for \"bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467\"" Jan 28 04:12:28.135918 containerd[1648]: time="2026-01-28T04:12:28.135888451Z" level=info msg="connecting to shim bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467" address="unix:///run/containerd/s/281ef21e5e5c5857a39074456a49e7e98922f88819c220037d3eba140a113983" protocol=ttrpc version=3 Jan 28 04:12:28.175541 systemd[1]: Started cri-containerd-bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467.scope - libcontainer container bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467. Jan 28 04:12:28.207000 audit: BPF prog-id=173 op=LOAD Jan 28 04:12:28.216277 kernel: kauditd_printk_skb: 62 callbacks suppressed Jan 28 04:12:28.216426 kernel: audit: type=1334 audit(1769573548.207:571): prog-id=173 op=LOAD Jan 28 04:12:28.219000 audit: BPF prog-id=174 op=LOAD Jan 28 04:12:28.219000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.223298 kernel: audit: type=1334 audit(1769573548.219:572): prog-id=174 op=LOAD Jan 28 04:12:28.223393 kernel: audit: type=1300 audit(1769573548.219:572): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.233296 kernel: audit: type=1327 audit(1769573548.219:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.220000 audit: BPF prog-id=174 op=UNLOAD Jan 28 04:12:28.220000 audit[3628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.237914 kernel: audit: type=1334 audit(1769573548.220:573): prog-id=174 op=UNLOAD Jan 28 04:12:28.238009 kernel: audit: type=1300 audit(1769573548.220:573): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 04:12:28.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.242968 kernel: audit: type=1327 audit(1769573548.220:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.221000 audit: BPF prog-id=175 op=LOAD Jan 28 04:12:28.246650 kernel: audit: type=1334 audit(1769573548.221:574): prog-id=175 op=LOAD Jan 28 04:12:28.221000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.249196 kernel: audit: type=1300 audit(1769573548.221:574): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.255055 kernel: audit: type=1327 audit(1769573548.221:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.222000 audit: BPF prog-id=176 op=LOAD Jan 28 04:12:28.222000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.222000 audit: BPF prog-id=176 op=UNLOAD Jan 28 04:12:28.222000 audit[3628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.222000 audit: BPF prog-id=175 op=UNLOAD Jan 28 
04:12:28.222000 audit[3628]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.222000 audit: BPF prog-id=177 op=LOAD Jan 28 04:12:28.222000 audit[3628]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3482 pid=3628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:28.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266646636633333356437623038306534353465656237626534363736 Jan 28 04:12:28.317545 containerd[1648]: time="2026-01-28T04:12:28.317453647Z" level=info msg="StartContainer for \"bfdf6c335d7b080e454eeb7be467675f1e67b0bc956f039c63b1ae843d998467\" returns successfully" Jan 28 04:12:28.614211 kubelet[2950]: E0128 04:12:28.613576 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:28.878339 kubelet[2950]: I0128 04:12:28.877332 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55cb6ddb5f-9lwx4" podStartSLOduration=2.284219656 podStartE2EDuration="8.877073961s" podCreationTimestamp="2026-01-28 04:12:20 +0000 UTC" firstStartedPulling="2026-01-28 04:12:21.451607827 +0000 UTC m=+27.063561597" lastFinishedPulling="2026-01-28 04:12:28.044462119 +0000 UTC m=+33.656415902" observedRunningTime="2026-01-28 04:12:28.876786122 +0000 UTC m=+34.488739927" watchObservedRunningTime="2026-01-28 04:12:28.877073961 +0000 UTC m=+34.489027747" Jan 28 04:12:29.867757 kubelet[2950]: I0128 04:12:29.867531 2950 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 04:12:30.613873 kubelet[2950]: E0128 04:12:30.613809 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:32.615562 kubelet[2950]: E0128 04:12:32.615245 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:34.613195 kubelet[2950]: E0128 04:12:34.613048 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:35.529737 containerd[1648]: time="2026-01-28T04:12:35.529672746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:35.531041 containerd[1648]: time="2026-01-28T04:12:35.531008584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 04:12:35.531758 containerd[1648]: time="2026-01-28T04:12:35.531656007Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:35.534761 containerd[1648]: time="2026-01-28T04:12:35.534347782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:35.535822 containerd[1648]: time="2026-01-28T04:12:35.535455181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.489869896s" Jan 28 04:12:35.535822 containerd[1648]: time="2026-01-28T04:12:35.535502292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 04:12:35.539723 containerd[1648]: time="2026-01-28T04:12:35.539582646Z" level=info msg="CreateContainer within sandbox \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 04:12:35.557352 containerd[1648]: time="2026-01-28T04:12:35.555802438Z" level=info msg="Container 1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:35.560595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169502628.mount: Deactivated successfully. Jan 28 04:12:35.580656 containerd[1648]: time="2026-01-28T04:12:35.580588340Z" level=info msg="CreateContainer within sandbox \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192\"" Jan 28 04:12:35.582185 containerd[1648]: time="2026-01-28T04:12:35.582138541Z" level=info msg="StartContainer for \"1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192\"" Jan 28 04:12:35.586933 containerd[1648]: time="2026-01-28T04:12:35.586884992Z" level=info msg="connecting to shim 1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192" address="unix:///run/containerd/s/9be2319cd126e54d4f09d6b931ceae104975c48ba3bb205fc59bc15b45e7c81a" protocol=ttrpc version=3 Jan 28 04:12:35.673544 systemd[1]: Started cri-containerd-1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192.scope - libcontainer container 1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192. 
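[Annotation] The pod_startup_latency_tracker entry for calico-typha-55cb6ddb5f-9lwx4 a few lines back can be reproduced from its own numbers: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small check using only the monotonic m=+ offsets quoted in that entry (a sketch for verification, not kubelet code):

package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+ values) from the pod_startup_latency_tracker entry.
	firstPull := 27.063561597 // firstStartedPulling
	lastPull := 33.656415902  // lastFinishedPulling
	e2e := 8.877073961        // podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp

	pull := lastPull - firstPull // ~6.593s spent pulling images
	slo := e2e - pull            // ~2.284s, matching the logged podStartSLOduration
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)
}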
Jan 28 04:12:35.752000 audit: BPF prog-id=178 op=LOAD Jan 28 04:12:35.761527 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 04:12:35.761691 kernel: audit: type=1334 audit(1769573555.752:579): prog-id=178 op=LOAD Jan 28 04:12:35.752000 audit[3676]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.770646 kernel: audit: type=1300 audit(1769573555.752:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.770737 kernel: audit: type=1327 audit(1769573555.752:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.752000 audit: BPF prog-id=179 op=LOAD Jan 28 04:12:35.776327 kernel: audit: type=1334 audit(1769573555.752:580): prog-id=179 op=LOAD Jan 28 04:12:35.776637 kernel: audit: type=1300 audit(1769573555.752:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.752000 audit[3676]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.783528 kernel: audit: type=1327 audit(1769573555.752:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.752000 audit: BPF prog-id=179 op=UNLOAD Jan 28 04:12:35.788062 kernel: audit: type=1334 audit(1769573555.752:581): prog-id=179 op=UNLOAD Jan 28 04:12:35.752000 audit[3676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.794292 kernel: audit: type=1300 
audit(1769573555.752:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.800286 kernel: audit: type=1327 audit(1769573555.752:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.811280 kernel: audit: type=1334 audit(1769573555.752:582): prog-id=178 op=UNLOAD Jan 28 04:12:35.752000 audit: BPF prog-id=178 op=UNLOAD Jan 28 04:12:35.752000 audit[3676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.752000 audit: BPF prog-id=180 op=LOAD Jan 28 04:12:35.752000 audit[3676]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3509 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:35.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164346165623232346632376534633238303537366262336136343733 Jan 28 04:12:35.860744 containerd[1648]: time="2026-01-28T04:12:35.860569658Z" level=info msg="StartContainer for \"1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192\" returns successfully" Jan 28 04:12:36.613290 kubelet[2950]: E0128 04:12:36.612915 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:36.956478 systemd[1]: cri-containerd-1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192.scope: Deactivated successfully. Jan 28 04:12:36.957054 systemd[1]: cri-containerd-1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192.scope: Consumed 809ms CPU time, 158.6M memory peak, 6.8M read from disk, 171.3M written to disk. 
Jan 28 04:12:36.961000 audit: BPF prog-id=180 op=UNLOAD Jan 28 04:12:36.992080 containerd[1648]: time="2026-01-28T04:12:36.990823113Z" level=info msg="received container exit event container_id:\"1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192\" id:\"1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192\" pid:3688 exited_at:{seconds:1769573556 nanos:990178071}" Jan 28 04:12:37.038338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d4aeb224f27e4c280576bb3a6473be27c4e133234e4d89d48ca6f7c6587e192-rootfs.mount: Deactivated successfully. Jan 28 04:12:37.051798 kubelet[2950]: I0128 04:12:37.051764 2950 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 04:12:37.258368 systemd[1]: Created slice kubepods-burstable-pod128fdf3d_9296_4653_81bd_f8134dc33789.slice - libcontainer container kubepods-burstable-pod128fdf3d_9296_4653_81bd_f8134dc33789.slice. Jan 28 04:12:37.286405 systemd[1]: Created slice kubepods-besteffort-pod53f99505_aca3_4278_8799_01f0eba5681f.slice - libcontainer container kubepods-besteffort-pod53f99505_aca3_4278_8799_01f0eba5681f.slice. Jan 28 04:12:37.301003 systemd[1]: Created slice kubepods-besteffort-poda75b9d4b_fd28_4515_89fe_b1c194b4eb55.slice - libcontainer container kubepods-besteffort-poda75b9d4b_fd28_4515_89fe_b1c194b4eb55.slice. Jan 28 04:12:37.320006 systemd[1]: Created slice kubepods-burstable-pod1c614f8b_7e15_4f62_a1d7_df2d998fe9fb.slice - libcontainer container kubepods-burstable-pod1c614f8b_7e15_4f62_a1d7_df2d998fe9fb.slice. Jan 28 04:12:37.335428 systemd[1]: Created slice kubepods-besteffort-podf4a30036_8006_4d4f_855c_5cae3c37a049.slice - libcontainer container kubepods-besteffort-podf4a30036_8006_4d4f_855c_5cae3c37a049.slice. Jan 28 04:12:37.351096 systemd[1]: Created slice kubepods-besteffort-podab23ab24_7e12_4864_a3ee_8b4882a74a22.slice - libcontainer container kubepods-besteffort-podab23ab24_7e12_4864_a3ee_8b4882a74a22.slice. Jan 28 04:12:37.361944 systemd[1]: Created slice kubepods-besteffort-podd08f7533_ee5b_4a11_b707_6aef7c12a55d.slice - libcontainer container kubepods-besteffort-podd08f7533_ee5b_4a11_b707_6aef7c12a55d.slice. 
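[Annotation] The "Created slice" entries above show the systemd cgroup driver's naming scheme for pods: each pod gets a transient slice named kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID replaced by underscores, which is why pod 128fdf3d-9296-4653-81bd-f8134dc33789 appears as kubepods-burstable-pod128fdf3d_9296_4653_81bd_f8134dc33789.slice. A tiny illustrative helper reproducing the names visible in the log (the function is mine, not the kubelet's own code):

package main

import (
	"fmt"
	"strings"
)

// sliceNameFor mirrors the naming seen in the journal: QoS class plus the pod UID
// with dashes mapped to underscores. Illustrative sketch only.
func sliceNameFor(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceNameFor("burstable", "128fdf3d-9296-4653-81bd-f8134dc33789"))
	// kubepods-burstable-pod128fdf3d_9296_4653_81bd_f8134dc33789.slice
	fmt.Println(sliceNameFor("besteffort", "53f99505-aca3-4278-8799-01f0eba5681f"))
	// kubepods-besteffort-pod53f99505_aca3_4278_8799_01f0eba5681f.slice
}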
Jan 28 04:12:37.374358 kubelet[2950]: I0128 04:12:37.372905 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53f99505-aca3-4278-8799-01f0eba5681f-tigera-ca-bundle\") pod \"calico-kube-controllers-5877564c64-ssm6r\" (UID: \"53f99505-aca3-4278-8799-01f0eba5681f\") " pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" Jan 28 04:12:37.375644 kubelet[2950]: I0128 04:12:37.375602 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d08f7533-ee5b-4a11-b707-6aef7c12a55d-config\") pod \"goldmane-666569f655-xf9j7\" (UID: \"d08f7533-ee5b-4a11-b707-6aef7c12a55d\") " pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:37.376017 kubelet[2950]: I0128 04:12:37.375656 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d08f7533-ee5b-4a11-b707-6aef7c12a55d-goldmane-ca-bundle\") pod \"goldmane-666569f655-xf9j7\" (UID: \"d08f7533-ee5b-4a11-b707-6aef7c12a55d\") " pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:37.376017 kubelet[2950]: I0128 04:12:37.375686 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-backend-key-pair\") pod \"whisker-8595fc9944-khdch\" (UID: \"f4a30036-8006-4d4f-855c-5cae3c37a049\") " pod="calico-system/whisker-8595fc9944-khdch" Jan 28 04:12:37.376017 kubelet[2950]: I0128 04:12:37.375721 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65ns\" (UniqueName: \"kubernetes.io/projected/53f99505-aca3-4278-8799-01f0eba5681f-kube-api-access-j65ns\") pod \"calico-kube-controllers-5877564c64-ssm6r\" (UID: \"53f99505-aca3-4278-8799-01f0eba5681f\") " pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" Jan 28 04:12:37.376017 kubelet[2950]: I0128 04:12:37.375752 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d08f7533-ee5b-4a11-b707-6aef7c12a55d-goldmane-key-pair\") pod \"goldmane-666569f655-xf9j7\" (UID: \"d08f7533-ee5b-4a11-b707-6aef7c12a55d\") " pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:37.376017 kubelet[2950]: I0128 04:12:37.375800 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab23ab24-7e12-4864-a3ee-8b4882a74a22-calico-apiserver-certs\") pod \"calico-apiserver-67db4dc4b5-8mhgz\" (UID: \"ab23ab24-7e12-4864-a3ee-8b4882a74a22\") " pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" Jan 28 04:12:37.376287 kubelet[2950]: I0128 04:12:37.375834 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55sg\" (UniqueName: \"kubernetes.io/projected/a75b9d4b-fd28-4515-89fe-b1c194b4eb55-kube-api-access-r55sg\") pod \"calico-apiserver-67db4dc4b5-cstcv\" (UID: \"a75b9d4b-fd28-4515-89fe-b1c194b4eb55\") " pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" Jan 28 04:12:37.376287 kubelet[2950]: I0128 04:12:37.375879 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/1c614f8b-7e15-4f62-a1d7-df2d998fe9fb-config-volume\") pod \"coredns-668d6bf9bc-sskjw\" (UID: \"1c614f8b-7e15-4f62-a1d7-df2d998fe9fb\") " pod="kube-system/coredns-668d6bf9bc-sskjw" Jan 28 04:12:37.376287 kubelet[2950]: I0128 04:12:37.375907 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6z88\" (UniqueName: \"kubernetes.io/projected/d08f7533-ee5b-4a11-b707-6aef7c12a55d-kube-api-access-b6z88\") pod \"goldmane-666569f655-xf9j7\" (UID: \"d08f7533-ee5b-4a11-b707-6aef7c12a55d\") " pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:37.376287 kubelet[2950]: I0128 04:12:37.375936 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6p6n\" (UniqueName: \"kubernetes.io/projected/ab23ab24-7e12-4864-a3ee-8b4882a74a22-kube-api-access-c6p6n\") pod \"calico-apiserver-67db4dc4b5-8mhgz\" (UID: \"ab23ab24-7e12-4864-a3ee-8b4882a74a22\") " pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" Jan 28 04:12:37.376287 kubelet[2950]: I0128 04:12:37.375976 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmxv\" (UniqueName: \"kubernetes.io/projected/1c614f8b-7e15-4f62-a1d7-df2d998fe9fb-kube-api-access-gvmxv\") pod \"coredns-668d6bf9bc-sskjw\" (UID: \"1c614f8b-7e15-4f62-a1d7-df2d998fe9fb\") " pod="kube-system/coredns-668d6bf9bc-sskjw" Jan 28 04:12:37.377432 kubelet[2950]: I0128 04:12:37.376018 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qmb\" (UniqueName: \"kubernetes.io/projected/128fdf3d-9296-4653-81bd-f8134dc33789-kube-api-access-r5qmb\") pod \"coredns-668d6bf9bc-v2j5w\" (UID: \"128fdf3d-9296-4653-81bd-f8134dc33789\") " pod="kube-system/coredns-668d6bf9bc-v2j5w" Jan 28 04:12:37.377432 kubelet[2950]: I0128 04:12:37.376048 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a75b9d4b-fd28-4515-89fe-b1c194b4eb55-calico-apiserver-certs\") pod \"calico-apiserver-67db4dc4b5-cstcv\" (UID: \"a75b9d4b-fd28-4515-89fe-b1c194b4eb55\") " pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" Jan 28 04:12:37.377432 kubelet[2950]: I0128 04:12:37.376082 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27w2\" (UniqueName: \"kubernetes.io/projected/f4a30036-8006-4d4f-855c-5cae3c37a049-kube-api-access-j27w2\") pod \"whisker-8595fc9944-khdch\" (UID: \"f4a30036-8006-4d4f-855c-5cae3c37a049\") " pod="calico-system/whisker-8595fc9944-khdch" Jan 28 04:12:37.377432 kubelet[2950]: I0128 04:12:37.376125 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/128fdf3d-9296-4653-81bd-f8134dc33789-config-volume\") pod \"coredns-668d6bf9bc-v2j5w\" (UID: \"128fdf3d-9296-4653-81bd-f8134dc33789\") " pod="kube-system/coredns-668d6bf9bc-v2j5w" Jan 28 04:12:37.377432 kubelet[2950]: I0128 04:12:37.376157 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-ca-bundle\") pod \"whisker-8595fc9944-khdch\" (UID: \"f4a30036-8006-4d4f-855c-5cae3c37a049\") " pod="calico-system/whisker-8595fc9944-khdch" 
Jan 28 04:12:37.573229 containerd[1648]: time="2026-01-28T04:12:37.572413646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v2j5w,Uid:128fdf3d-9296-4653-81bd-f8134dc33789,Namespace:kube-system,Attempt:0,}" Jan 28 04:12:37.608846 containerd[1648]: time="2026-01-28T04:12:37.608598445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5877564c64-ssm6r,Uid:53f99505-aca3-4278-8799-01f0eba5681f,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:37.612867 containerd[1648]: time="2026-01-28T04:12:37.612469696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-cstcv,Uid:a75b9d4b-fd28-4515-89fe-b1c194b4eb55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 04:12:37.631059 containerd[1648]: time="2026-01-28T04:12:37.630995488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sskjw,Uid:1c614f8b-7e15-4f62-a1d7-df2d998fe9fb,Namespace:kube-system,Attempt:0,}" Jan 28 04:12:37.650605 containerd[1648]: time="2026-01-28T04:12:37.650546328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8595fc9944-khdch,Uid:f4a30036-8006-4d4f-855c-5cae3c37a049,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:37.658159 containerd[1648]: time="2026-01-28T04:12:37.658073226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-8mhgz,Uid:ab23ab24-7e12-4864-a3ee-8b4882a74a22,Namespace:calico-apiserver,Attempt:0,}" Jan 28 04:12:37.669306 containerd[1648]: time="2026-01-28T04:12:37.668748974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xf9j7,Uid:d08f7533-ee5b-4a11-b707-6aef7c12a55d,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:37.977776 containerd[1648]: time="2026-01-28T04:12:37.976760290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 04:12:38.068215 containerd[1648]: time="2026-01-28T04:12:38.068159084Z" level=error msg="Failed to destroy network for sandbox \"8010053ed90c90ec88e21e4a2006f4d06c1846ec6dd93c1a2892b678243a3711\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.072715 systemd[1]: run-netns-cni\x2d03d5ba81\x2d24e5\x2d63c0\x2d688b\x2dd76a46f0abf3.mount: Deactivated successfully. 
Jan 28 04:12:38.104298 containerd[1648]: time="2026-01-28T04:12:38.078920185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8595fc9944-khdch,Uid:f4a30036-8006-4d4f-855c-5cae3c37a049,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8010053ed90c90ec88e21e4a2006f4d06c1846ec6dd93c1a2892b678243a3711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.108276 containerd[1648]: time="2026-01-28T04:12:38.082682866Z" level=error msg="Failed to destroy network for sandbox \"a9cd44f3dc00a3d22f92801f42dc290c09444dd31e4810483d7e0425612fd333\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.108684 containerd[1648]: time="2026-01-28T04:12:38.087420148Z" level=error msg="Failed to destroy network for sandbox \"cde1c335fe4ff9f5f8e04d8dae18148bae3d83217dffd8e0be05216858ea5820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.109850 containerd[1648]: time="2026-01-28T04:12:38.095441349Z" level=error msg="Failed to destroy network for sandbox \"61f3371091b3ab833e23723c5752f533e626c04baf0054b2461e4f369b338465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.111302 containerd[1648]: time="2026-01-28T04:12:38.110091585Z" level=error msg="Failed to destroy network for sandbox \"9eba42581ceefc746c559f11ef2a11423d1124199a63f2f5357bf561eefadd5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.111214 systemd[1]: run-netns-cni\x2d382f43dc\x2d9a1a\x2da974\x2d2d7d\x2d39bf7689ff5c.mount: Deactivated successfully. Jan 28 04:12:38.116650 containerd[1648]: time="2026-01-28T04:12:38.116366302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5877564c64-ssm6r,Uid:53f99505-aca3-4278-8799-01f0eba5681f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9cd44f3dc00a3d22f92801f42dc290c09444dd31e4810483d7e0425612fd333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.117642 systemd[1]: run-netns-cni\x2daf48e4c7\x2db4f6\x2d5964\x2db370\x2d4665cde0ad3d.mount: Deactivated successfully. Jan 28 04:12:38.118131 systemd[1]: run-netns-cni\x2d2b7688c5\x2dd49e\x2dfccf\x2db98a\x2d897c6095eb6e.mount: Deactivated successfully. Jan 28 04:12:38.118234 systemd[1]: run-netns-cni\x2d7cea8fb1\x2d242b\x2d953e\x2d65a8\x2dea45fa4f3c4e.mount: Deactivated successfully. 
Jan 28 04:12:38.125130 containerd[1648]: time="2026-01-28T04:12:38.124523417Z" level=error msg="Failed to destroy network for sandbox \"cf201e5cc28c399e336001b7ad8be6085f8afd07fa8acac1d9ef63ae909e47a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.126610 kubelet[2950]: E0128 04:12:38.125457 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9cd44f3dc00a3d22f92801f42dc290c09444dd31e4810483d7e0425612fd333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.126610 kubelet[2950]: E0128 04:12:38.125457 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8010053ed90c90ec88e21e4a2006f4d06c1846ec6dd93c1a2892b678243a3711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.126610 kubelet[2950]: E0128 04:12:38.125608 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9cd44f3dc00a3d22f92801f42dc290c09444dd31e4810483d7e0425612fd333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" Jan 28 04:12:38.126610 kubelet[2950]: E0128 04:12:38.125615 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8010053ed90c90ec88e21e4a2006f4d06c1846ec6dd93c1a2892b678243a3711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8595fc9944-khdch" Jan 28 04:12:38.129958 kubelet[2950]: E0128 04:12:38.125659 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9cd44f3dc00a3d22f92801f42dc290c09444dd31e4810483d7e0425612fd333\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" Jan 28 04:12:38.129958 kubelet[2950]: E0128 04:12:38.125684 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8010053ed90c90ec88e21e4a2006f4d06c1846ec6dd93c1a2892b678243a3711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8595fc9944-khdch" Jan 28 04:12:38.129958 kubelet[2950]: E0128 04:12:38.125757 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8595fc9944-khdch_calico-system(f4a30036-8006-4d4f-855c-5cae3c37a049)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"whisker-8595fc9944-khdch_calico-system(f4a30036-8006-4d4f-855c-5cae3c37a049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8010053ed90c90ec88e21e4a2006f4d06c1846ec6dd93c1a2892b678243a3711\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8595fc9944-khdch" podUID="f4a30036-8006-4d4f-855c-5cae3c37a049" Jan 28 04:12:38.129803 systemd[1]: run-netns-cni\x2d6c34b576\x2df179\x2d8aa1\x2d2200\x2df17ab3781166.mount: Deactivated successfully. Jan 28 04:12:38.131965 containerd[1648]: time="2026-01-28T04:12:38.129411883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sskjw,Uid:1c614f8b-7e15-4f62-a1d7-df2d998fe9fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1c335fe4ff9f5f8e04d8dae18148bae3d83217dffd8e0be05216858ea5820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.132066 kubelet[2950]: E0128 04:12:38.125739 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9cd44f3dc00a3d22f92801f42dc290c09444dd31e4810483d7e0425612fd333\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:12:38.132066 kubelet[2950]: E0128 04:12:38.129663 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1c335fe4ff9f5f8e04d8dae18148bae3d83217dffd8e0be05216858ea5820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.132066 kubelet[2950]: E0128 04:12:38.129801 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1c335fe4ff9f5f8e04d8dae18148bae3d83217dffd8e0be05216858ea5820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sskjw" Jan 28 04:12:38.133454 kubelet[2950]: E0128 04:12:38.130182 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1c335fe4ff9f5f8e04d8dae18148bae3d83217dffd8e0be05216858ea5820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sskjw" Jan 28 04:12:38.133454 kubelet[2950]: 
E0128 04:12:38.130295 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sskjw_kube-system(1c614f8b-7e15-4f62-a1d7-df2d998fe9fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sskjw_kube-system(1c614f8b-7e15-4f62-a1d7-df2d998fe9fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cde1c335fe4ff9f5f8e04d8dae18148bae3d83217dffd8e0be05216858ea5820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sskjw" podUID="1c614f8b-7e15-4f62-a1d7-df2d998fe9fb" Jan 28 04:12:38.133581 containerd[1648]: time="2026-01-28T04:12:38.133025908Z" level=error msg="Failed to destroy network for sandbox \"38e39f2c3463d8dbfacba87b406b7c694ac7a8605b344e8f75478d1bc00af074\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.135844 containerd[1648]: time="2026-01-28T04:12:38.135746233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xf9j7,Uid:d08f7533-ee5b-4a11-b707-6aef7c12a55d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f3371091b3ab833e23723c5752f533e626c04baf0054b2461e4f369b338465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.136511 kubelet[2950]: E0128 04:12:38.136465 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f3371091b3ab833e23723c5752f533e626c04baf0054b2461e4f369b338465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.137049 kubelet[2950]: E0128 04:12:38.136672 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f3371091b3ab833e23723c5752f533e626c04baf0054b2461e4f369b338465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:38.137138 kubelet[2950]: E0128 04:12:38.137050 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61f3371091b3ab833e23723c5752f533e626c04baf0054b2461e4f369b338465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:38.137770 kubelet[2950]: E0128 04:12:38.137330 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"61f3371091b3ab833e23723c5752f533e626c04baf0054b2461e4f369b338465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:12:38.138059 containerd[1648]: time="2026-01-28T04:12:38.137946144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-cstcv,Uid:a75b9d4b-fd28-4515-89fe-b1c194b4eb55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eba42581ceefc746c559f11ef2a11423d1124199a63f2f5357bf561eefadd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.139505 kubelet[2950]: E0128 04:12:38.139449 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eba42581ceefc746c559f11ef2a11423d1124199a63f2f5357bf561eefadd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.139585 kubelet[2950]: E0128 04:12:38.139531 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eba42581ceefc746c559f11ef2a11423d1124199a63f2f5357bf561eefadd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" Jan 28 04:12:38.139646 kubelet[2950]: E0128 04:12:38.139567 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eba42581ceefc746c559f11ef2a11423d1124199a63f2f5357bf561eefadd5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" Jan 28 04:12:38.140056 kubelet[2950]: E0128 04:12:38.139769 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eba42581ceefc746c559f11ef2a11423d1124199a63f2f5357bf561eefadd5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:12:38.140284 containerd[1648]: time="2026-01-28T04:12:38.140224486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v2j5w,Uid:128fdf3d-9296-4653-81bd-f8134dc33789,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf201e5cc28c399e336001b7ad8be6085f8afd07fa8acac1d9ef63ae909e47a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.140899 kubelet[2950]: E0128 04:12:38.140863 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf201e5cc28c399e336001b7ad8be6085f8afd07fa8acac1d9ef63ae909e47a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.141124 kubelet[2950]: E0128 04:12:38.141082 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf201e5cc28c399e336001b7ad8be6085f8afd07fa8acac1d9ef63ae909e47a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v2j5w" Jan 28 04:12:38.141190 kubelet[2950]: E0128 04:12:38.141128 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf201e5cc28c399e336001b7ad8be6085f8afd07fa8acac1d9ef63ae909e47a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v2j5w" Jan 28 04:12:38.141234 kubelet[2950]: E0128 04:12:38.141191 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v2j5w_kube-system(128fdf3d-9296-4653-81bd-f8134dc33789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v2j5w_kube-system(128fdf3d-9296-4653-81bd-f8134dc33789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf201e5cc28c399e336001b7ad8be6085f8afd07fa8acac1d9ef63ae909e47a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v2j5w" podUID="128fdf3d-9296-4653-81bd-f8134dc33789" Jan 28 04:12:38.141877 kubelet[2950]: E0128 04:12:38.141466 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e39f2c3463d8dbfacba87b406b7c694ac7a8605b344e8f75478d1bc00af074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.141877 kubelet[2950]: E0128 04:12:38.141506 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e39f2c3463d8dbfacba87b406b7c694ac7a8605b344e8f75478d1bc00af074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" Jan 28 04:12:38.141877 kubelet[2950]: E0128 04:12:38.141577 2950 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e39f2c3463d8dbfacba87b406b7c694ac7a8605b344e8f75478d1bc00af074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" Jan 28 04:12:38.142208 containerd[1648]: time="2026-01-28T04:12:38.141011144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-8mhgz,Uid:ab23ab24-7e12-4864-a3ee-8b4882a74a22,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e39f2c3463d8dbfacba87b406b7c694ac7a8605b344e8f75478d1bc00af074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.142341 kubelet[2950]: E0128 04:12:38.141673 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38e39f2c3463d8dbfacba87b406b7c694ac7a8605b344e8f75478d1bc00af074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:12:38.623807 systemd[1]: Created slice kubepods-besteffort-podc2a88baa_8755_4a0f_b81e_f2ef466fcd2d.slice - libcontainer container kubepods-besteffort-podc2a88baa_8755_4a0f_b81e_f2ef466fcd2d.slice. 
Jan 28 04:12:38.627795 containerd[1648]: time="2026-01-28T04:12:38.627747242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnlhb,Uid:c2a88baa-8755-4a0f-b81e-f2ef466fcd2d,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:38.722951 containerd[1648]: time="2026-01-28T04:12:38.722777593Z" level=error msg="Failed to destroy network for sandbox \"c78a82b50bce794633a25bbe87ee769730141038e1a0e4ea2b3adf9dff994895\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.725477 containerd[1648]: time="2026-01-28T04:12:38.725362168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnlhb,Uid:c2a88baa-8755-4a0f-b81e-f2ef466fcd2d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78a82b50bce794633a25bbe87ee769730141038e1a0e4ea2b3adf9dff994895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.726313 kubelet[2950]: E0128 04:12:38.725882 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78a82b50bce794633a25bbe87ee769730141038e1a0e4ea2b3adf9dff994895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:38.726313 kubelet[2950]: E0128 04:12:38.726044 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78a82b50bce794633a25bbe87ee769730141038e1a0e4ea2b3adf9dff994895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:38.726313 kubelet[2950]: E0128 04:12:38.726079 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c78a82b50bce794633a25bbe87ee769730141038e1a0e4ea2b3adf9dff994895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:38.726563 kubelet[2950]: E0128 04:12:38.726185 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c78a82b50bce794633a25bbe87ee769730141038e1a0e4ea2b3adf9dff994895\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:39.038076 systemd[1]: run-netns-cni\x2dcf45bf05\x2d175b\x2dfeba\x2d1ad9\x2da51fe1241080.mount: Deactivated successfully. 
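Every RunPodSandbox failure in this stretch is the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container (whose image pull begins at 04:12:37.976) writes once it is running. A hedged sketch of that readiness check, based only on the error text in the log and not on the plugin's actual source:

```go
// Minimal illustration of the condition behind the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors:
// until calico/node writes this file, every CNI ADD/DEL fails as logged.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("calico/node not ready yet: %v\n", err)
		return
	}
	fmt.Println("found", nodenameFile, "- sandbox networking can proceed")
}
```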
Jan 28 04:12:45.059685 kubelet[2950]: I0128 04:12:45.059593 2950 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 04:12:45.224000 audit[3942]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:45.232712 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 04:12:45.232828 kernel: audit: type=1325 audit(1769573565.224:585): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:45.224000 audit[3942]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1f317840 a2=0 a3=7ffd1f31782c items=0 ppid=3100 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:45.245434 kernel: audit: type=1300 audit(1769573565.224:585): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1f317840 a2=0 a3=7ffd1f31782c items=0 ppid=3100 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:45.245618 kernel: audit: type=1327 audit(1769573565.224:585): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:45.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:45.237000 audit[3942]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:45.255398 kernel: audit: type=1325 audit(1769573565.237:586): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:45.255467 kernel: audit: type=1300 audit(1769573565.237:586): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd1f317840 a2=0 a3=7ffd1f31782c items=0 ppid=3100 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:45.237000 audit[3942]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd1f317840 a2=0 a3=7ffd1f31782c items=0 ppid=3100 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:45.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:45.258778 kernel: audit: type=1327 audit(1769573565.237:586): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:49.614685 containerd[1648]: time="2026-01-28T04:12:49.614616542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8595fc9944-khdch,Uid:f4a30036-8006-4d4f-855c-5cae3c37a049,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:49.625059 containerd[1648]: time="2026-01-28T04:12:49.623855278Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-cstcv,Uid:a75b9d4b-fd28-4515-89fe-b1c194b4eb55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 04:12:49.625855 containerd[1648]: time="2026-01-28T04:12:49.625788010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v2j5w,Uid:128fdf3d-9296-4653-81bd-f8134dc33789,Namespace:kube-system,Attempt:0,}" Jan 28 04:12:49.880707 containerd[1648]: time="2026-01-28T04:12:49.880502488Z" level=error msg="Failed to destroy network for sandbox \"4456d597072c390c56d378289ef6701ea4360fdee53c72d125a44ab9764119a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:49.885627 systemd[1]: run-netns-cni\x2dcd6fd101\x2dd224\x2d3b53\x2d4797\x2d706f7c6d94ca.mount: Deactivated successfully. Jan 28 04:12:49.893923 containerd[1648]: time="2026-01-28T04:12:49.893542879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v2j5w,Uid:128fdf3d-9296-4653-81bd-f8134dc33789,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4456d597072c390c56d378289ef6701ea4360fdee53c72d125a44ab9764119a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:49.896277 kubelet[2950]: E0128 04:12:49.894669 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4456d597072c390c56d378289ef6701ea4360fdee53c72d125a44ab9764119a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:49.896277 kubelet[2950]: E0128 04:12:49.894800 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4456d597072c390c56d378289ef6701ea4360fdee53c72d125a44ab9764119a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v2j5w" Jan 28 04:12:49.896277 kubelet[2950]: E0128 04:12:49.894836 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4456d597072c390c56d378289ef6701ea4360fdee53c72d125a44ab9764119a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v2j5w" Jan 28 04:12:49.896901 kubelet[2950]: E0128 04:12:49.894915 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v2j5w_kube-system(128fdf3d-9296-4653-81bd-f8134dc33789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v2j5w_kube-system(128fdf3d-9296-4653-81bd-f8134dc33789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4456d597072c390c56d378289ef6701ea4360fdee53c72d125a44ab9764119a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v2j5w" podUID="128fdf3d-9296-4653-81bd-f8134dc33789" Jan 28 04:12:49.902740 containerd[1648]: time="2026-01-28T04:12:49.902582365Z" level=error msg="Failed to destroy network for sandbox \"eb75076be4283fa69ff83784b69ca5f73906494fc06be2789f6ad5a8b8193a2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:49.908005 systemd[1]: run-netns-cni\x2dd5a713b4\x2d6721\x2d3459\x2d47fb\x2d36ebd0f530da.mount: Deactivated successfully. Jan 28 04:12:49.949778 containerd[1648]: time="2026-01-28T04:12:49.949718545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8595fc9944-khdch,Uid:f4a30036-8006-4d4f-855c-5cae3c37a049,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb75076be4283fa69ff83784b69ca5f73906494fc06be2789f6ad5a8b8193a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:49.950670 kubelet[2950]: E0128 04:12:49.950599 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb75076be4283fa69ff83784b69ca5f73906494fc06be2789f6ad5a8b8193a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:49.950949 kubelet[2950]: E0128 04:12:49.950903 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb75076be4283fa69ff83784b69ca5f73906494fc06be2789f6ad5a8b8193a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8595fc9944-khdch" Jan 28 04:12:49.951142 kubelet[2950]: E0128 04:12:49.951109 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb75076be4283fa69ff83784b69ca5f73906494fc06be2789f6ad5a8b8193a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8595fc9944-khdch" Jan 28 04:12:49.951351 kubelet[2950]: E0128 04:12:49.951300 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8595fc9944-khdch_calico-system(f4a30036-8006-4d4f-855c-5cae3c37a049)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8595fc9944-khdch_calico-system(f4a30036-8006-4d4f-855c-5cae3c37a049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb75076be4283fa69ff83784b69ca5f73906494fc06be2789f6ad5a8b8193a2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8595fc9944-khdch" podUID="f4a30036-8006-4d4f-855c-5cae3c37a049" Jan 28 04:12:50.060405 containerd[1648]: 
time="2026-01-28T04:12:50.060348623Z" level=error msg="Failed to destroy network for sandbox \"cc4fdf8da7ebee88acb64e35b839b4a9675a358d2666128658dea6cda1c31a25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.064971 containerd[1648]: time="2026-01-28T04:12:50.064904965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-cstcv,Uid:a75b9d4b-fd28-4515-89fe-b1c194b4eb55,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4fdf8da7ebee88acb64e35b839b4a9675a358d2666128658dea6cda1c31a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.066616 kubelet[2950]: E0128 04:12:50.066104 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4fdf8da7ebee88acb64e35b839b4a9675a358d2666128658dea6cda1c31a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.066842 kubelet[2950]: E0128 04:12:50.066689 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4fdf8da7ebee88acb64e35b839b4a9675a358d2666128658dea6cda1c31a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" Jan 28 04:12:50.066920 kubelet[2950]: E0128 04:12:50.066751 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc4fdf8da7ebee88acb64e35b839b4a9675a358d2666128658dea6cda1c31a25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" Jan 28 04:12:50.067517 kubelet[2950]: E0128 04:12:50.067050 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc4fdf8da7ebee88acb64e35b839b4a9675a358d2666128658dea6cda1c31a25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:12:50.618174 containerd[1648]: time="2026-01-28T04:12:50.617984398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xf9j7,Uid:d08f7533-ee5b-4a11-b707-6aef7c12a55d,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:50.623202 systemd[1]: 
run-netns-cni\x2d2d38a237\x2d8186\x2d653e\x2d03ba\x2dd798cf0fc1bf.mount: Deactivated successfully. Jan 28 04:12:50.629597 containerd[1648]: time="2026-01-28T04:12:50.629551946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sskjw,Uid:1c614f8b-7e15-4f62-a1d7-df2d998fe9fb,Namespace:kube-system,Attempt:0,}" Jan 28 04:12:50.847289 containerd[1648]: time="2026-01-28T04:12:50.845312419Z" level=error msg="Failed to destroy network for sandbox \"eea9f8784e87884ad973beee28eda472c044262f2c9850704af49d95db58a2b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.849486 systemd[1]: run-netns-cni\x2d78d4f94c\x2d87aa\x2d8784\x2d81dd\x2d434185118085.mount: Deactivated successfully. Jan 28 04:12:50.894993 containerd[1648]: time="2026-01-28T04:12:50.893962588Z" level=error msg="Failed to destroy network for sandbox \"ea475c2f7bab23248488f36010df5c07d7be0fe0030c43f0d3ae4a4614e48aab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.901697 systemd[1]: run-netns-cni\x2dcb1e4f9b\x2de920\x2dd41b\x2dfdc1\x2d5367d60be80a.mount: Deactivated successfully. Jan 28 04:12:50.975281 containerd[1648]: time="2026-01-28T04:12:50.975094054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sskjw,Uid:1c614f8b-7e15-4f62-a1d7-df2d998fe9fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea9f8784e87884ad973beee28eda472c044262f2c9850704af49d95db58a2b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.978091 kubelet[2950]: E0128 04:12:50.977990 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea9f8784e87884ad973beee28eda472c044262f2c9850704af49d95db58a2b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.978838 kubelet[2950]: E0128 04:12:50.978125 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea9f8784e87884ad973beee28eda472c044262f2c9850704af49d95db58a2b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sskjw" Jan 28 04:12:50.978838 kubelet[2950]: E0128 04:12:50.978159 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea9f8784e87884ad973beee28eda472c044262f2c9850704af49d95db58a2b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sskjw" Jan 28 04:12:50.978838 kubelet[2950]: E0128 04:12:50.978240 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-sskjw_kube-system(1c614f8b-7e15-4f62-a1d7-df2d998fe9fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sskjw_kube-system(1c614f8b-7e15-4f62-a1d7-df2d998fe9fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eea9f8784e87884ad973beee28eda472c044262f2c9850704af49d95db58a2b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sskjw" podUID="1c614f8b-7e15-4f62-a1d7-df2d998fe9fb" Jan 28 04:12:50.980581 containerd[1648]: time="2026-01-28T04:12:50.980538153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xf9j7,Uid:d08f7533-ee5b-4a11-b707-6aef7c12a55d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea475c2f7bab23248488f36010df5c07d7be0fe0030c43f0d3ae4a4614e48aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.981944 kubelet[2950]: E0128 04:12:50.981557 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea475c2f7bab23248488f36010df5c07d7be0fe0030c43f0d3ae4a4614e48aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:50.981944 kubelet[2950]: E0128 04:12:50.981606 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea475c2f7bab23248488f36010df5c07d7be0fe0030c43f0d3ae4a4614e48aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:50.981944 kubelet[2950]: E0128 04:12:50.981643 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea475c2f7bab23248488f36010df5c07d7be0fe0030c43f0d3ae4a4614e48aab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xf9j7" Jan 28 04:12:50.982118 kubelet[2950]: E0128 04:12:50.981697 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea475c2f7bab23248488f36010df5c07d7be0fe0030c43f0d3ae4a4614e48aab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:12:51.614768 containerd[1648]: time="2026-01-28T04:12:51.614592231Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-8mhgz,Uid:ab23ab24-7e12-4864-a3ee-8b4882a74a22,Namespace:calico-apiserver,Attempt:0,}" Jan 28 04:12:51.631612 containerd[1648]: time="2026-01-28T04:12:51.631410293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5877564c64-ssm6r,Uid:53f99505-aca3-4278-8799-01f0eba5681f,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:51.826951 containerd[1648]: time="2026-01-28T04:12:51.826874446Z" level=error msg="Failed to destroy network for sandbox \"b3f5151571eb0ced00395d4350dc77f132da0c0708ed0a4c5319ae1e41eecb7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:51.833166 systemd[1]: run-netns-cni\x2d95f0393d\x2de8b8\x2d89b9\x2d50cd\x2d2ce1214a15eb.mount: Deactivated successfully. Jan 28 04:12:51.837280 containerd[1648]: time="2026-01-28T04:12:51.834816288Z" level=error msg="Failed to destroy network for sandbox \"af4802c3f6182aad13f48aab0ff0e7167e712707d9d6f19deaab56ce4db59836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:51.838602 systemd[1]: run-netns-cni\x2d59808892\x2d5ccc\x2d97f0\x2d20b5\x2d252723dd93c5.mount: Deactivated successfully. Jan 28 04:12:51.841863 containerd[1648]: time="2026-01-28T04:12:51.841617073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-8mhgz,Uid:ab23ab24-7e12-4864-a3ee-8b4882a74a22,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3f5151571eb0ced00395d4350dc77f132da0c0708ed0a4c5319ae1e41eecb7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:51.848346 kubelet[2950]: E0128 04:12:51.847935 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3f5151571eb0ced00395d4350dc77f132da0c0708ed0a4c5319ae1e41eecb7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:51.848565 kubelet[2950]: E0128 04:12:51.848421 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3f5151571eb0ced00395d4350dc77f132da0c0708ed0a4c5319ae1e41eecb7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" Jan 28 04:12:51.848565 kubelet[2950]: E0128 04:12:51.848483 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3f5151571eb0ced00395d4350dc77f132da0c0708ed0a4c5319ae1e41eecb7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" Jan 28 04:12:51.848752 kubelet[2950]: 
E0128 04:12:51.848588 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3f5151571eb0ced00395d4350dc77f132da0c0708ed0a4c5319ae1e41eecb7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:12:51.849464 containerd[1648]: time="2026-01-28T04:12:51.848976214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5877564c64-ssm6r,Uid:53f99505-aca3-4278-8799-01f0eba5681f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4802c3f6182aad13f48aab0ff0e7167e712707d9d6f19deaab56ce4db59836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:51.849583 kubelet[2950]: E0128 04:12:51.849178 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4802c3f6182aad13f48aab0ff0e7167e712707d9d6f19deaab56ce4db59836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:51.849583 kubelet[2950]: E0128 04:12:51.849242 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4802c3f6182aad13f48aab0ff0e7167e712707d9d6f19deaab56ce4db59836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" Jan 28 04:12:51.849583 kubelet[2950]: E0128 04:12:51.849286 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4802c3f6182aad13f48aab0ff0e7167e712707d9d6f19deaab56ce4db59836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" Jan 28 04:12:51.850086 kubelet[2950]: E0128 04:12:51.849371 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af4802c3f6182aad13f48aab0ff0e7167e712707d9d6f19deaab56ce4db59836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:12:51.920826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199851626.mount: Deactivated successfully. Jan 28 04:12:52.016629 containerd[1648]: time="2026-01-28T04:12:52.016544832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:52.034455 containerd[1648]: time="2026-01-28T04:12:52.033706644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 04:12:52.057983 containerd[1648]: time="2026-01-28T04:12:52.057920093Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:52.061541 containerd[1648]: time="2026-01-28T04:12:52.061467504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 04:12:52.062433 containerd[1648]: time="2026-01-28T04:12:52.062394076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 14.084902383s" Jan 28 04:12:52.069444 containerd[1648]: time="2026-01-28T04:12:52.069302262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 04:12:52.120068 containerd[1648]: time="2026-01-28T04:12:52.120014193Z" level=info msg="CreateContainer within sandbox \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 04:12:52.208728 containerd[1648]: time="2026-01-28T04:12:52.208438199Z" level=info msg="Container 5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:12:52.259710 containerd[1648]: time="2026-01-28T04:12:52.259608686Z" level=info msg="CreateContainer within sandbox \"61e1a3d6c4a5a167780cddd33ea466d7b2595d214f06c6cae8ce46aba4b9ed88\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926\"" Jan 28 04:12:52.261401 containerd[1648]: time="2026-01-28T04:12:52.261182108Z" level=info msg="StartContainer for \"5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926\"" Jan 28 04:12:52.266623 containerd[1648]: time="2026-01-28T04:12:52.266580092Z" level=info msg="connecting to shim 5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926" address="unix:///run/containerd/s/9be2319cd126e54d4f09d6b931ceae104975c48ba3bb205fc59bc15b45e7c81a" protocol=ttrpc version=3 Jan 28 04:12:52.415552 systemd[1]: Started cri-containerd-5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926.scope - libcontainer container 5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926. 
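Note: every sandbox failure in this stretch is the same condition reported for a different pod — the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico-node writes only once it is running with /var/lib/calico/ mounted. That is consistent with the whisker sandbox further below succeeding a few seconds after the calico-node container started above comes up. A minimal, hypothetical Go sketch of that style of check, useful for troubleshooting on the node (this is not the plugin's actual code):

```go
// nodename_check.go - hypothetical sketch mirroring the check behind the
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/var/lib/calico/nodename" // written by calico-node at startup
	data, err := os.ReadFile(path)
	if err != nil {
		// Same guidance the plugin logs: calico-node is not running yet, or
		// it has not mounted /var/lib/calico/ into its container.
		fmt.Fprintf(os.Stderr, "stat %s failed: %v\n", path, err)
		fmt.Fprintln(os.Stderr, "check that the calico/node container is running and has mounted /var/lib/calico/")
		os.Exit(1)
	}
	fmt.Println("calico nodename:", strings.TrimSpace(string(data)))
}
```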
Jan 28 04:12:52.498000 audit: BPF prog-id=181 op=LOAD Jan 28 04:12:52.505143 kernel: audit: type=1334 audit(1769573572.498:587): prog-id=181 op=LOAD Jan 28 04:12:52.505270 kernel: audit: type=1300 audit(1769573572.498:587): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.498000 audit[4131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.516323 kernel: audit: type=1327 audit(1769573572.498:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.498000 audit: BPF prog-id=182 op=LOAD Jan 28 04:12:52.498000 audit[4131]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.519595 kernel: audit: type=1334 audit(1769573572.498:588): prog-id=182 op=LOAD Jan 28 04:12:52.519661 kernel: audit: type=1300 audit(1769573572.498:588): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.524721 kernel: audit: type=1327 audit(1769573572.498:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.498000 audit: BPF prog-id=182 op=UNLOAD Jan 28 04:12:52.528826 kernel: audit: type=1334 audit(1769573572.498:589): prog-id=182 op=UNLOAD Jan 28 04:12:52.498000 audit[4131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.531860 kernel: audit: type=1300 audit(1769573572.498:589): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.541199 kernel: audit: type=1327 audit(1769573572.498:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.498000 audit: BPF prog-id=181 op=UNLOAD Jan 28 04:12:52.498000 audit[4131]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.498000 audit: BPF prog-id=183 op=LOAD Jan 28 04:12:52.498000 audit[4131]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002146e8 a2=98 a3=0 items=0 ppid=3509 pid=4131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:52.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303937333435303031393163653535656566353738313964353232 Jan 28 04:12:52.551435 kernel: audit: type=1334 audit(1769573572.498:590): prog-id=181 op=UNLOAD Jan 28 04:12:52.599059 containerd[1648]: time="2026-01-28T04:12:52.598985828Z" level=info msg="StartContainer for \"5f09734500191ce55eef57819d522851fc9584d9bae8945bd0a927eeb0a98926\" returns successfully" Jan 28 04:12:52.615814 containerd[1648]: time="2026-01-28T04:12:52.615530027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnlhb,Uid:c2a88baa-8755-4a0f-b81e-f2ef466fcd2d,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:52.732585 containerd[1648]: time="2026-01-28T04:12:52.732518104Z" level=error msg="Failed to destroy network for sandbox \"e0ecad9ffcac2c4cbeaa8953d396707d7085fc818ce2485f2059ce05366ac45b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:52.736622 systemd[1]: run-netns-cni\x2d22c3d642\x2d1b23\x2ddb3f\x2db7c2\x2d82756d481d26.mount: Deactivated successfully. 
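Note: the proctitle= values in these audit records are hex-encoded because the process title contains NUL separators between argv elements; decoded, the runc records above read "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/5f0973…" (cut short by the kernel's proctitle length limit). A small sketch of the decoding, using only a prefix quoted from the record above:

```go
// proctitle_decode.go - decodes the hex proctitle= field seen in the audit
// records above; argv elements are NUL-separated in the decoded bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Prefix of the proctitle= value from the runc audit record above.
	const enc = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: runc --root /run/containerd/runc/k8s.io
}
```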
Jan 28 04:12:52.738826 containerd[1648]: time="2026-01-28T04:12:52.738431603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnlhb,Uid:c2a88baa-8755-4a0f-b81e-f2ef466fcd2d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ecad9ffcac2c4cbeaa8953d396707d7085fc818ce2485f2059ce05366ac45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:52.739808 kubelet[2950]: E0128 04:12:52.739747 2950 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ecad9ffcac2c4cbeaa8953d396707d7085fc818ce2485f2059ce05366ac45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 04:12:52.740601 kubelet[2950]: E0128 04:12:52.739849 2950 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ecad9ffcac2c4cbeaa8953d396707d7085fc818ce2485f2059ce05366ac45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:52.740601 kubelet[2950]: E0128 04:12:52.739901 2950 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0ecad9ffcac2c4cbeaa8953d396707d7085fc818ce2485f2059ce05366ac45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnlhb" Jan 28 04:12:52.740601 kubelet[2950]: E0128 04:12:52.740005 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0ecad9ffcac2c4cbeaa8953d396707d7085fc818ce2485f2059ce05366ac45b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:12:53.062860 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 04:12:53.063871 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 04:12:53.281615 kubelet[2950]: I0128 04:12:53.281489 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zh7sc" podStartSLOduration=2.594086387 podStartE2EDuration="33.274116285s" podCreationTimestamp="2026-01-28 04:12:20 +0000 UTC" firstStartedPulling="2026-01-28 04:12:21.390442312 +0000 UTC m=+27.002396083" lastFinishedPulling="2026-01-28 04:12:52.070472211 +0000 UTC m=+57.682425981" observedRunningTime="2026-01-28 04:12:53.218548863 +0000 UTC m=+58.830502649" watchObservedRunningTime="2026-01-28 04:12:53.274116285 +0000 UTC m=+58.886070065" Jan 28 04:12:53.535018 kubelet[2950]: I0128 04:12:53.534799 2950 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-backend-key-pair\") pod \"f4a30036-8006-4d4f-855c-5cae3c37a049\" (UID: \"f4a30036-8006-4d4f-855c-5cae3c37a049\") " Jan 28 04:12:53.535018 kubelet[2950]: I0128 04:12:53.534884 2950 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27w2\" (UniqueName: \"kubernetes.io/projected/f4a30036-8006-4d4f-855c-5cae3c37a049-kube-api-access-j27w2\") pod \"f4a30036-8006-4d4f-855c-5cae3c37a049\" (UID: \"f4a30036-8006-4d4f-855c-5cae3c37a049\") " Jan 28 04:12:53.536276 kubelet[2950]: I0128 04:12:53.535882 2950 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-ca-bundle\") pod \"f4a30036-8006-4d4f-855c-5cae3c37a049\" (UID: \"f4a30036-8006-4d4f-855c-5cae3c37a049\") " Jan 28 04:12:53.537283 kubelet[2950]: I0128 04:12:53.536730 2950 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f4a30036-8006-4d4f-855c-5cae3c37a049" (UID: "f4a30036-8006-4d4f-855c-5cae3c37a049"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 04:12:53.547363 kubelet[2950]: I0128 04:12:53.545060 2950 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a30036-8006-4d4f-855c-5cae3c37a049-kube-api-access-j27w2" (OuterVolumeSpecName: "kube-api-access-j27w2") pod "f4a30036-8006-4d4f-855c-5cae3c37a049" (UID: "f4a30036-8006-4d4f-855c-5cae3c37a049"). InnerVolumeSpecName "kube-api-access-j27w2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 04:12:53.546125 systemd[1]: var-lib-kubelet-pods-f4a30036\x2d8006\x2d4d4f\x2d855c\x2d5cae3c37a049-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj27w2.mount: Deactivated successfully. Jan 28 04:12:53.554905 kubelet[2950]: I0128 04:12:53.554836 2950 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f4a30036-8006-4d4f-855c-5cae3c37a049" (UID: "f4a30036-8006-4d4f-855c-5cae3c37a049"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 04:12:53.555754 systemd[1]: var-lib-kubelet-pods-f4a30036\x2d8006\x2d4d4f\x2d855c\x2d5cae3c37a049-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
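Note: the pod_startup_latency_tracker entry at the top of this block is internally consistent: podStartE2EDuration (pod creation at 04:12:20 to watchObservedRunningTime at 04:12:53.274116285) is 33.274116285s, of which roughly 30.68s fell between firstStartedPulling and lastFinishedPulling, and podStartSLOduration is that E2E time with the image-pull window subtracted, matching the startup SLO's "excluding image pulls" definition. A quick re-derivation from the wall-clock values in the entry follows; the one-nanosecond difference from the logged 2.594086387 is consistent with the kubelet subtracting the monotonic m=+… readings rather than the wall-clock strings.

```go
// slo_check.go - re-derives calico-node-zh7sc's startup durations from the
// timestamps in the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-28 04:12:20 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2026-01-28 04:12:21.390442312 +0000 UTC") // firstStartedPulling
	lastPull := parse("2026-01-28 04:12:52.070472211 +0000 UTC")  // lastFinishedPulling
	running := parse("2026-01-28 04:12:53.274116285 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 33.274116285s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // ~2.594086386s (podStartSLOduration)
	fmt.Println(e2e, slo)
}
```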
Jan 28 04:12:53.636851 kubelet[2950]: I0128 04:12:53.636734 2950 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-ca-bundle\") on node \"srv-3avyi.gb1.brightbox.com\" DevicePath \"\"" Jan 28 04:12:53.636851 kubelet[2950]: I0128 04:12:53.636785 2950 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j27w2\" (UniqueName: \"kubernetes.io/projected/f4a30036-8006-4d4f-855c-5cae3c37a049-kube-api-access-j27w2\") on node \"srv-3avyi.gb1.brightbox.com\" DevicePath \"\"" Jan 28 04:12:53.636851 kubelet[2950]: I0128 04:12:53.636805 2950 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4a30036-8006-4d4f-855c-5cae3c37a049-whisker-backend-key-pair\") on node \"srv-3avyi.gb1.brightbox.com\" DevicePath \"\"" Jan 28 04:12:54.073804 systemd[1]: Removed slice kubepods-besteffort-podf4a30036_8006_4d4f_855c_5cae3c37a049.slice - libcontainer container kubepods-besteffort-podf4a30036_8006_4d4f_855c_5cae3c37a049.slice. Jan 28 04:12:54.237854 systemd[1]: Created slice kubepods-besteffort-pod597d28d3_837d_4a2a_8aed_8b9a166157ec.slice - libcontainer container kubepods-besteffort-pod597d28d3_837d_4a2a_8aed_8b9a166157ec.slice. Jan 28 04:12:54.346517 kubelet[2950]: I0128 04:12:54.346358 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stnn\" (UniqueName: \"kubernetes.io/projected/597d28d3-837d-4a2a-8aed-8b9a166157ec-kube-api-access-6stnn\") pod \"whisker-557cdf66d6-zbq82\" (UID: \"597d28d3-837d-4a2a-8aed-8b9a166157ec\") " pod="calico-system/whisker-557cdf66d6-zbq82" Jan 28 04:12:54.346517 kubelet[2950]: I0128 04:12:54.346434 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/597d28d3-837d-4a2a-8aed-8b9a166157ec-whisker-backend-key-pair\") pod \"whisker-557cdf66d6-zbq82\" (UID: \"597d28d3-837d-4a2a-8aed-8b9a166157ec\") " pod="calico-system/whisker-557cdf66d6-zbq82" Jan 28 04:12:54.346517 kubelet[2950]: I0128 04:12:54.346482 2950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/597d28d3-837d-4a2a-8aed-8b9a166157ec-whisker-ca-bundle\") pod \"whisker-557cdf66d6-zbq82\" (UID: \"597d28d3-837d-4a2a-8aed-8b9a166157ec\") " pod="calico-system/whisker-557cdf66d6-zbq82" Jan 28 04:12:54.544862 containerd[1648]: time="2026-01-28T04:12:54.544798894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557cdf66d6-zbq82,Uid:597d28d3-837d-4a2a-8aed-8b9a166157ec,Namespace:calico-system,Attempt:0,}" Jan 28 04:12:54.618776 kubelet[2950]: I0128 04:12:54.618630 2950 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a30036-8006-4d4f-855c-5cae3c37a049" path="/var/lib/kubelet/pods/f4a30036-8006-4d4f-855c-5cae3c37a049/volumes" Jan 28 04:12:54.992129 systemd-networkd[1551]: calic101884c6d2: Link UP Jan 28 04:12:54.993652 systemd-networkd[1551]: calic101884c6d2: Gained carrier Jan 28 04:12:55.022706 containerd[1648]: 2026-01-28 04:12:54.589 [INFO][4273] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 04:12:55.022706 containerd[1648]: 2026-01-28 04:12:54.628 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0 whisker-557cdf66d6- calico-system 597d28d3-837d-4a2a-8aed-8b9a166157ec 963 0 2026-01-28 04:12:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:557cdf66d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com whisker-557cdf66d6-zbq82 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic101884c6d2 [] [] }} ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-" Jan 28 04:12:55.022706 containerd[1648]: 2026-01-28 04:12:54.628 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.022706 containerd[1648]: 2026-01-28 04:12:54.867 [INFO][4285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" HandleID="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Workload="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.870 [INFO][4285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" HandleID="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Workload="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3avyi.gb1.brightbox.com", "pod":"whisker-557cdf66d6-zbq82", "timestamp":"2026-01-28 04:12:54.867652723 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.870 [INFO][4285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.871 [INFO][4285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.872 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.891 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.905 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.911 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.914 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.023067 containerd[1648]: 2026-01-28 04:12:54.917 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.917 [INFO][4285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.919 [INFO][4285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.926 [INFO][4285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.941 [INFO][4285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.65/26] block=192.168.63.64/26 handle="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.942 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.65/26] handle="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.942 [INFO][4285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
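Note: in the IPAM trace above, the /26 suffix on the claimed address ("[192.168.63.65/26]") is the affinity block size, not the pod's own prefix; the pod receives the single address 192.168.63.65 out of the 64-address block 192.168.63.64/26 affine to this host (the endpoint just below records it as 192.168.63.65/32). A small check of the two values exactly as logged:

```go
// ipam_block_check.go - confirms the address claimed above lies inside the
// block affine to srv-3avyi.gb1.brightbox.com.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.63.64/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.63.65")
	ones, bits := block.Mask.Size()
	// Prints: block 192.168.63.64/26 holds 64 addresses; contains 192.168.63.65: true
	fmt.Printf("block %s holds %d addresses; contains %s: %v\n",
		block, 1<<(bits-ones), ip, block.Contains(ip))
}
```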
Jan 28 04:12:55.027362 containerd[1648]: 2026-01-28 04:12:54.942 [INFO][4285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.65/26] IPv6=[] ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" HandleID="k8s-pod-network.8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Workload="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.027652 containerd[1648]: 2026-01-28 04:12:54.945 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0", GenerateName:"whisker-557cdf66d6-", Namespace:"calico-system", SelfLink:"", UID:"597d28d3-837d-4a2a-8aed-8b9a166157ec", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"557cdf66d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"whisker-557cdf66d6-zbq82", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic101884c6d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:12:55.027652 containerd[1648]: 2026-01-28 04:12:54.946 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.65/32] ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.027817 containerd[1648]: 2026-01-28 04:12:54.946 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic101884c6d2 ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.027817 containerd[1648]: 2026-01-28 04:12:54.991 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.027986 containerd[1648]: 2026-01-28 04:12:54.994 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" 
Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0", GenerateName:"whisker-557cdf66d6-", Namespace:"calico-system", SelfLink:"", UID:"597d28d3-837d-4a2a-8aed-8b9a166157ec", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"557cdf66d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f", Pod:"whisker-557cdf66d6-zbq82", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic101884c6d2", MAC:"8a:4f:6e:3c:6b:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:12:55.028082 containerd[1648]: 2026-01-28 04:12:55.011 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" Namespace="calico-system" Pod="whisker-557cdf66d6-zbq82" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-whisker--557cdf66d6--zbq82-eth0" Jan 28 04:12:55.317667 containerd[1648]: time="2026-01-28T04:12:55.317512978Z" level=info msg="connecting to shim 8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f" address="unix:///run/containerd/s/ec4bc98db4083bdf2f79bff663b2acfdfcc6d045d685fcaeaec76dfa6cace41e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:12:55.430873 systemd[1]: Started cri-containerd-8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f.scope - libcontainer container 8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f. 
Jan 28 04:12:55.491000 audit: BPF prog-id=184 op=LOAD Jan 28 04:12:55.492000 audit: BPF prog-id=185 op=LOAD Jan 28 04:12:55.492000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.493000 audit: BPF prog-id=185 op=UNLOAD Jan 28 04:12:55.493000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.493000 audit: BPF prog-id=186 op=LOAD Jan 28 04:12:55.493000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.493000 audit: BPF prog-id=187 op=LOAD Jan 28 04:12:55.493000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.494000 audit: BPF prog-id=187 op=UNLOAD Jan 28 04:12:55.494000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.494000 audit: BPF prog-id=186 op=UNLOAD Jan 28 04:12:55.494000 audit[4403]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.494000 audit: BPF prog-id=188 op=LOAD Jan 28 04:12:55.494000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4392 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862396465396639353433306536396137366430313035623562643763 Jan 28 04:12:55.646430 containerd[1648]: time="2026-01-28T04:12:55.645807146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557cdf66d6-zbq82,Uid:597d28d3-837d-4a2a-8aed-8b9a166157ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b9de9f95430e69a76d0105b5bd7c9fb8b8b8c1ede7b0cf79260243e1ccdbe2f\"" Jan 28 04:12:55.669115 containerd[1648]: time="2026-01-28T04:12:55.669051564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 04:12:55.869000 audit: BPF prog-id=189 op=LOAD Jan 28 04:12:55.869000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2461ab30 a2=98 a3=1fffffffffffffff items=0 ppid=4330 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.869000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 04:12:55.869000 audit: BPF prog-id=189 op=UNLOAD Jan 28 04:12:55.869000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc2461ab00 a3=0 items=0 ppid=4330 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.869000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 04:12:55.871000 audit: BPF prog-id=190 op=LOAD Jan 28 04:12:55.871000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2461aa10 a2=94 a3=3 items=0 ppid=4330 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.871000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 04:12:55.871000 audit: BPF prog-id=190 op=UNLOAD Jan 28 04:12:55.871000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc2461aa10 a2=94 a3=3 items=0 ppid=4330 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.871000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 04:12:55.871000 audit: BPF prog-id=191 op=LOAD Jan 28 04:12:55.871000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2461aa50 a2=94 a3=7ffc2461ac30 items=0 ppid=4330 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.871000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 04:12:55.871000 audit: BPF prog-id=191 op=UNLOAD Jan 28 04:12:55.871000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc2461aa50 a2=94 a3=7ffc2461ac30 items=0 ppid=4330 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.871000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 04:12:55.874000 audit: BPF prog-id=192 op=LOAD Jan 28 04:12:55.874000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde89fc330 a2=98 a3=3 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.874000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:55.874000 audit: BPF prog-id=192 op=UNLOAD Jan 28 04:12:55.874000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde89fc300 a3=0 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.874000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:55.875000 audit: BPF prog-id=193 op=LOAD Jan 28 04:12:55.875000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde89fc120 a2=94 a3=54428f items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.875000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:55.875000 audit: BPF prog-id=193 op=UNLOAD Jan 28 04:12:55.875000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde89fc120 a2=94 a3=54428f items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.875000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:55.875000 audit: BPF prog-id=194 op=LOAD Jan 28 04:12:55.875000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde89fc150 a2=94 a3=2 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.875000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:55.875000 audit: BPF prog-id=194 op=UNLOAD Jan 28 04:12:55.875000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde89fc150 a2=0 a3=2 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:55.875000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.008504 containerd[1648]: time="2026-01-28T04:12:56.008246314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:12:56.024660 containerd[1648]: time="2026-01-28T04:12:56.024401332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 04:12:56.024660 containerd[1648]: time="2026-01-28T04:12:56.024454206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 04:12:56.029681 kubelet[2950]: E0128 04:12:56.029541 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:12:56.036965 kubelet[2950]: E0128 04:12:56.036657 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:12:56.077665 kubelet[2950]: E0128 04:12:56.077577 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e1e1e2ca0574df88a00a87ecf91f97d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 04:12:56.082280 containerd[1648]: time="2026-01-28T04:12:56.081347459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 04:12:56.156000 audit: BPF prog-id=195 op=LOAD Jan 28 04:12:56.156000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde89fc010 a2=94 a3=1 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.156000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.156000 audit: BPF prog-id=195 op=UNLOAD Jan 28 04:12:56.156000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde89fc010 a2=94 a3=1 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.156000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.170000 audit: BPF prog-id=196 op=LOAD Jan 28 04:12:56.170000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde89fc000 a2=94 a3=4 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.170000 audit: BPF prog-id=196 op=UNLOAD Jan 28 04:12:56.170000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde89fc000 a2=0 a3=4 
items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.171000 audit: BPF prog-id=197 op=LOAD Jan 28 04:12:56.171000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffde89fbe60 a2=94 a3=5 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.171000 audit: BPF prog-id=197 op=UNLOAD Jan 28 04:12:56.171000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffde89fbe60 a2=0 a3=5 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.171000 audit: BPF prog-id=198 op=LOAD Jan 28 04:12:56.171000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde89fc080 a2=94 a3=6 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.171000 audit: BPF prog-id=198 op=UNLOAD Jan 28 04:12:56.171000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde89fc080 a2=0 a3=6 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.171000 audit: BPF prog-id=199 op=LOAD Jan 28 04:12:56.171000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde89fb830 a2=94 a3=88 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.172000 audit: BPF prog-id=200 op=LOAD Jan 28 04:12:56.172000 audit[4467]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffde89fb6b0 a2=94 a3=2 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.172000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.172000 audit: BPF prog-id=200 op=UNLOAD Jan 28 04:12:56.172000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffde89fb6e0 a2=0 a3=7ffde89fb7e0 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.172000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.173000 audit: BPF prog-id=199 op=UNLOAD Jan 28 04:12:56.173000 audit[4467]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=130e1d10 a2=0 a3=e90b6f3d8287d6b3 items=0 ppid=4330 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.173000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 04:12:56.187000 audit: BPF prog-id=201 op=LOAD Jan 28 04:12:56.187000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb2248e90 a2=98 a3=1999999999999999 items=0 ppid=4330 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.187000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 04:12:56.187000 audit: BPF prog-id=201 op=UNLOAD Jan 28 04:12:56.187000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcb2248e60 a3=0 items=0 ppid=4330 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.187000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 04:12:56.187000 audit: BPF prog-id=202 op=LOAD Jan 28 04:12:56.187000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb2248d70 a2=94 a3=ffff items=0 ppid=4330 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.187000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 04:12:56.187000 audit: BPF prog-id=202 op=UNLOAD Jan 28 04:12:56.187000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcb2248d70 a2=94 a3=ffff items=0 ppid=4330 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.187000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 04:12:56.187000 audit: BPF prog-id=203 op=LOAD Jan 28 04:12:56.187000 audit[4472]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffcb2248db0 a2=94 a3=7ffcb2248f90 items=0 ppid=4330 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.187000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 04:12:56.187000 audit: BPF prog-id=203 op=UNLOAD Jan 28 04:12:56.187000 audit[4472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcb2248db0 a2=94 a3=7ffcb2248f90 items=0 ppid=4330 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.187000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 04:12:56.290664 systemd-networkd[1551]: vxlan.calico: Link UP Jan 28 04:12:56.295131 systemd-networkd[1551]: vxlan.calico: Gained carrier Jan 28 04:12:56.327000 audit: BPF prog-id=204 op=LOAD Jan 28 04:12:56.327000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc12589330 a2=98 a3=20 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.327000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.327000 audit: BPF prog-id=204 op=UNLOAD Jan 28 04:12:56.327000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc12589300 a3=0 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.327000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=205 op=LOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc12589140 a2=94 a3=54428f items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=205 op=UNLOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc12589140 a2=94 a3=54428f items=0 ppid=4330 pid=4498 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=206 op=LOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc12589170 a2=94 a3=2 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=206 op=UNLOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc12589170 a2=0 a3=2 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=207 op=LOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc12588f20 a2=94 a3=4 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=207 op=UNLOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc12588f20 a2=94 a3=4 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=208 op=LOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc12589020 a2=94 a3=7ffc125891a0 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.328000 audit: BPF prog-id=208 op=UNLOAD Jan 28 04:12:56.328000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc12589020 a2=0 a3=7ffc125891a0 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.328000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.332000 audit: BPF prog-id=209 op=LOAD Jan 28 04:12:56.332000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc12588750 a2=94 a3=2 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.332000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.332000 audit: BPF prog-id=209 op=UNLOAD Jan 28 04:12:56.332000 audit[4498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc12588750 a2=0 a3=2 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.332000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.332000 audit: BPF prog-id=210 op=LOAD Jan 28 04:12:56.332000 audit[4498]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc12588850 a2=94 a3=30 items=0 ppid=4330 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.332000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 04:12:56.342000 audit: BPF prog-id=211 op=LOAD Jan 28 04:12:56.342000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff716c69e0 a2=98 a3=0 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.342000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.343000 audit: BPF prog-id=211 op=UNLOAD Jan 28 04:12:56.343000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=3 a1=8 a2=7fff716c69b0 a3=0 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.343000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.343000 audit: BPF prog-id=212 op=LOAD Jan 28 04:12:56.343000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff716c67d0 a2=94 a3=54428f items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.343000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.343000 audit: BPF prog-id=212 op=UNLOAD Jan 28 04:12:56.343000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff716c67d0 a2=94 a3=54428f items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.343000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.343000 audit: BPF prog-id=213 op=LOAD Jan 28 04:12:56.343000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff716c6800 a2=94 a3=2 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.343000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.344000 audit: BPF prog-id=213 op=UNLOAD Jan 28 04:12:56.344000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff716c6800 a2=0 a3=2 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.344000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.412306 containerd[1648]: time="2026-01-28T04:12:56.412174539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:12:56.423930 containerd[1648]: time="2026-01-28T04:12:56.423073969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 04:12:56.423930 containerd[1648]: time="2026-01-28T04:12:56.423127179Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 04:12:56.424112 kubelet[2950]: E0128 04:12:56.423410 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:12:56.424112 kubelet[2950]: E0128 04:12:56.423486 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:12:56.424239 kubelet[2950]: E0128 04:12:56.423653 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 04:12:56.427672 kubelet[2950]: E0128 04:12:56.427587 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:12:56.475467 systemd-networkd[1551]: calic101884c6d2: Gained IPv6LL Jan 28 04:12:56.595000 audit: BPF prog-id=214 op=LOAD Jan 28 04:12:56.595000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff716c66c0 a2=94 a3=1 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.595000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.595000 audit: BPF prog-id=214 op=UNLOAD Jan 28 04:12:56.595000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff716c66c0 a2=94 a3=1 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.595000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.610000 audit: BPF prog-id=215 op=LOAD Jan 28 04:12:56.610000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff716c66b0 a2=94 a3=4 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.610000 audit: BPF prog-id=215 op=UNLOAD Jan 28 04:12:56.610000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff716c66b0 a2=0 a3=4 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.610000 audit: BPF prog-id=216 op=LOAD Jan 28 04:12:56.610000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff716c6510 a2=94 a3=5 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.610000 audit: BPF prog-id=216 op=UNLOAD Jan 28 
04:12:56.610000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff716c6510 a2=0 a3=5 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.610000 audit: BPF prog-id=217 op=LOAD Jan 28 04:12:56.610000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff716c6730 a2=94 a3=6 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.610000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.611000 audit: BPF prog-id=217 op=UNLOAD Jan 28 04:12:56.611000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff716c6730 a2=0 a3=6 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.611000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.611000 audit: BPF prog-id=218 op=LOAD Jan 28 04:12:56.611000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff716c5ee0 a2=94 a3=88 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.611000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.611000 audit: BPF prog-id=219 op=LOAD Jan 28 04:12:56.611000 audit[4502]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff716c5d60 a2=94 a3=2 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.611000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.611000 audit: BPF prog-id=219 op=UNLOAD Jan 28 04:12:56.611000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff716c5d90 a2=0 a3=7fff716c5e90 items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.611000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.612000 audit: BPF prog-id=218 op=UNLOAD Jan 28 04:12:56.612000 audit[4502]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1e962d10 a2=0 a3=2186d822c33ceb4c items=0 ppid=4330 pid=4502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.612000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 04:12:56.620000 audit: BPF prog-id=210 op=UNLOAD Jan 28 04:12:56.620000 audit[4330]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000587c80 a2=0 a3=0 items=0 ppid=4297 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.620000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 04:12:56.698000 audit[4533]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4533 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:12:56.698000 audit[4533]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffe850e780 a2=0 a3=7fffe850e76c items=0 ppid=4330 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.698000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:12:56.702000 audit[4534]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4534 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:12:56.702000 audit[4534]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdeea733c0 a2=0 a3=7ffdeea733ac items=0 ppid=4330 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.702000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:12:56.704000 audit[4532]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:12:56.704000 audit[4532]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdaf8363b0 a2=0 a3=7ffdaf83639c items=0 ppid=4330 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.704000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:12:56.733000 audit[4538]: NETFILTER_CFG 
table=filter:124 family=2 entries=94 op=nft_register_chain pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:12:56.733000 audit[4538]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc5119c4a0 a2=0 a3=7ffc5119c48c items=0 ppid=4330 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:56.733000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:12:57.089860 kubelet[2950]: E0128 04:12:57.089687 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:12:57.196000 audit[4550]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:57.196000 audit[4550]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdcbcb9680 a2=0 a3=7ffdcbcb966c items=0 ppid=3100 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:57.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:57.202000 audit[4550]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:12:57.202000 audit[4550]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdcbcb9680 a2=0 a3=0 items=0 ppid=3100 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:12:57.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:12:57.755471 systemd-networkd[1551]: vxlan.calico: Gained IPv6LL Jan 28 04:13:01.614169 containerd[1648]: time="2026-01-28T04:13:01.613704103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xf9j7,Uid:d08f7533-ee5b-4a11-b707-6aef7c12a55d,Namespace:calico-system,Attempt:0,}" Jan 28 04:13:01.796546 systemd-networkd[1551]: cali1335855ff7f: Link UP Jan 28 04:13:01.798300 systemd-networkd[1551]: cali1335855ff7f: Gained carrier Jan 28 04:13:01.833317 containerd[1648]: 2026-01-28 04:13:01.678 [INFO][4551] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0 goldmane-666569f655- calico-system d08f7533-ee5b-4a11-b707-6aef7c12a55d 863 0 2026-01-28 04:12:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com goldmane-666569f655-xf9j7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1335855ff7f [] [] }} ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-" Jan 28 04:13:01.833317 containerd[1648]: 2026-01-28 04:13:01.679 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.833317 containerd[1648]: 2026-01-28 04:13:01.737 [INFO][4563] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" HandleID="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Workload="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.737 [INFO][4563] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" HandleID="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Workload="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3avyi.gb1.brightbox.com", "pod":"goldmane-666569f655-xf9j7", "timestamp":"2026-01-28 04:13:01.737211916 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.737 [INFO][4563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.737 [INFO][4563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.737 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.747 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.755 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.762 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.767 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.833612 containerd[1648]: 2026-01-28 04:13:01.770 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.770 [INFO][4563] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.772 [INFO][4563] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404 Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.778 [INFO][4563] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.787 [INFO][4563] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.66/26] block=192.168.63.64/26 handle="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.787 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.66/26] handle="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.787 [INFO][4563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 04:13:01.834068 containerd[1648]: 2026-01-28 04:13:01.787 [INFO][4563] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.66/26] IPv6=[] ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" HandleID="k8s-pod-network.95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Workload="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.834491 containerd[1648]: 2026-01-28 04:13:01.792 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d08f7533-ee5b-4a11-b707-6aef7c12a55d", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-xf9j7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1335855ff7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:01.834589 containerd[1648]: 2026-01-28 04:13:01.792 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.66/32] ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.834589 containerd[1648]: 2026-01-28 04:13:01.792 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1335855ff7f ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.834589 containerd[1648]: 2026-01-28 04:13:01.800 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.834717 containerd[1648]: 2026-01-28 04:13:01.800 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" 
Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d08f7533-ee5b-4a11-b707-6aef7c12a55d", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404", Pod:"goldmane-666569f655-xf9j7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1335855ff7f", MAC:"ca:b2:d5:95:0f:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:01.834806 containerd[1648]: 2026-01-28 04:13:01.820 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" Namespace="calico-system" Pod="goldmane-666569f655-xf9j7" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-goldmane--666569f655--xf9j7-eth0" Jan 28 04:13:01.858000 audit[4591]: NETFILTER_CFG table=filter:127 family=2 entries=44 op=nft_register_chain pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:01.867827 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 28 04:13:01.868048 kernel: audit: type=1325 audit(1769573581.858:668): table=filter:127 family=2 entries=44 op=nft_register_chain pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:01.871329 kernel: audit: type=1300 audit(1769573581.858:668): arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffc29e5a790 a2=0 a3=7ffc29e5a77c items=0 ppid=4330 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.858000 audit[4591]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffc29e5a790 a2=0 a3=7ffc29e5a77c items=0 ppid=4330 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.858000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:01.877914 kernel: audit: type=1327 audit(1769573581.858:668): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:01.903568 containerd[1648]: time="2026-01-28T04:13:01.903421375Z" level=info msg="connecting to shim 95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404" address="unix:///run/containerd/s/be5ac273c691cb5fcc140cfbd819d4f9844a57f15dd545bd59e03231c10df252" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:01.955659 systemd[1]: Started cri-containerd-95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404.scope - libcontainer container 95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404. Jan 28 04:13:01.984000 audit: BPF prog-id=220 op=LOAD Jan 28 04:13:01.987000 audit: BPF prog-id=221 op=LOAD Jan 28 04:13:01.988695 kernel: audit: type=1334 audit(1769573581.984:669): prog-id=220 op=LOAD Jan 28 04:13:01.988784 kernel: audit: type=1334 audit(1769573581.987:670): prog-id=221 op=LOAD Jan 28 04:13:01.990286 kernel: audit: type=1300 audit(1769573581.987:670): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.987000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.996953 kernel: audit: type=1327 audit(1769573581.987:670): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.987000 audit: BPF prog-id=221 op=UNLOAD Jan 28 04:13:02.000588 kernel: audit: type=1334 audit(1769573581.987:671): prog-id=221 op=UNLOAD Jan 28 04:13:01.987000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:02.003083 kernel: audit: type=1300 audit(1769573581.987:671): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:02.007879 kernel: audit: type=1327 audit(1769573581.987:671): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.987000 audit: BPF prog-id=222 op=LOAD Jan 28 04:13:01.987000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.987000 audit: BPF prog-id=223 op=LOAD Jan 28 04:13:01.987000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.988000 audit: BPF prog-id=223 op=UNLOAD Jan 28 04:13:01.988000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.988000 audit: BPF prog-id=222 op=UNLOAD Jan 28 04:13:01.988000 audit[4612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:01.988000 audit: BPF prog-id=224 op=LOAD Jan 28 04:13:01.988000 audit[4612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4601 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:01.988000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636131323831666535656565336566303065303964623563333363 Jan 28 04:13:02.066482 containerd[1648]: time="2026-01-28T04:13:02.066414721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xf9j7,Uid:d08f7533-ee5b-4a11-b707-6aef7c12a55d,Namespace:calico-system,Attempt:0,} returns sandbox id \"95ca1281fe5eee3ef00e09db5c33cf748f251ecfbaa0c514fadfd55c15065404\"" Jan 28 04:13:02.069030 containerd[1648]: time="2026-01-28T04:13:02.069001220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 04:13:02.424141 containerd[1648]: time="2026-01-28T04:13:02.424008705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:02.425554 containerd[1648]: time="2026-01-28T04:13:02.425490397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 04:13:02.425931 containerd[1648]: time="2026-01-28T04:13:02.425612057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:02.433604 kubelet[2950]: E0128 04:13:02.425883 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:13:02.433604 kubelet[2950]: E0128 04:13:02.431077 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:13:02.433604 kubelet[2950]: E0128 04:13:02.431363 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6z88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:02.434749 kubelet[2950]: E0128 04:13:02.433773 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:13:03.105376 kubelet[2950]: E0128 04:13:03.105210 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:13:03.159000 audit[4641]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:03.159000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe57c21620 a2=0 a3=7ffe57c2160c items=0 ppid=3100 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:03.159000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:03.165000 audit[4641]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4641 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:03.165000 audit[4641]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe57c21620 a2=0 a3=0 items=0 ppid=3100 pid=4641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:03.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:03.613199 containerd[1648]: time="2026-01-28T04:13:03.613122155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-8mhgz,Uid:ab23ab24-7e12-4864-a3ee-8b4882a74a22,Namespace:calico-apiserver,Attempt:0,}" Jan 28 04:13:03.614586 containerd[1648]: time="2026-01-28T04:13:03.613292115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v2j5w,Uid:128fdf3d-9296-4653-81bd-f8134dc33789,Namespace:kube-system,Attempt:0,}" Jan 28 04:13:03.614586 containerd[1648]: time="2026-01-28T04:13:03.613923561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-cstcv,Uid:a75b9d4b-fd28-4515-89fe-b1c194b4eb55,Namespace:calico-apiserver,Attempt:0,}" Jan 28 04:13:03.709380 systemd-networkd[1551]: cali1335855ff7f: Gained IPv6LL Jan 28 04:13:03.912927 systemd-networkd[1551]: calif260f824bf6: Link UP Jan 28 04:13:03.914924 systemd-networkd[1551]: calif260f824bf6: Gained carrier Jan 28 04:13:03.950156 containerd[1648]: 2026-01-28 04:13:03.755 [INFO][4664] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0 coredns-668d6bf9bc- kube-system 128fdf3d-9296-4653-81bd-f8134dc33789 855 0 2026-01-28 04:12:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com coredns-668d6bf9bc-v2j5w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif260f824bf6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-" Jan 28 04:13:03.950156 containerd[1648]: 2026-01-28 04:13:03.756 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:03.950156 containerd[1648]: 2026-01-28 04:13:03.828 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" HandleID="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Workload="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.828 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" HandleID="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Workload="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57f0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-3avyi.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-v2j5w", "timestamp":"2026-01-28 04:13:03.828144514 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.828 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.828 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.828 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.848 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.859 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.868 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.872 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.950741 containerd[1648]: 2026-01-28 04:13:03.875 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.875 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.878 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.883 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.892 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.67/26] block=192.168.63.64/26 handle="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.892 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.67/26] handle="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.892 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 04:13:03.951178 containerd[1648]: 2026-01-28 04:13:03.892 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.67/26] IPv6=[] ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" HandleID="k8s-pod-network.ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Workload="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:03.951575 containerd[1648]: 2026-01-28 04:13:03.897 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"128fdf3d-9296-4653-81bd-f8134dc33789", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-v2j5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif260f824bf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:03.951575 containerd[1648]: 2026-01-28 04:13:03.897 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.67/32] ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:03.951575 containerd[1648]: 2026-01-28 04:13:03.898 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif260f824bf6 ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:03.951575 containerd[1648]: 2026-01-28 04:13:03.915 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:03.951575 containerd[1648]: 2026-01-28 04:13:03.916 [INFO][4664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"128fdf3d-9296-4653-81bd-f8134dc33789", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b", Pod:"coredns-668d6bf9bc-v2j5w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif260f824bf6", MAC:"82:9a:8c:6e:b6:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:03.951575 containerd[1648]: 2026-01-28 04:13:03.942 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" Namespace="kube-system" Pod="coredns-668d6bf9bc-v2j5w" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--v2j5w-eth0" Jan 28 04:13:04.027000 audit[4716]: NETFILTER_CFG table=filter:130 family=2 entries=46 op=nft_register_chain pid=4716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:04.027000 audit[4716]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffe5f1ac0a0 a2=0 a3=7ffe5f1ac08c items=0 ppid=4330 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.027000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:04.033814 containerd[1648]: time="2026-01-28T04:13:04.033679806Z" level=info 
msg="connecting to shim ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b" address="unix:///run/containerd/s/6b5b6ffb2970811434de65ccccffb542ce4f150ad2bf1c803c60d11f2da33526" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:04.043881 systemd-networkd[1551]: cali6ee5bf87e8e: Link UP Jan 28 04:13:04.045854 systemd-networkd[1551]: cali6ee5bf87e8e: Gained carrier Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.763 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0 calico-apiserver-67db4dc4b5- calico-apiserver a75b9d4b-fd28-4515-89fe-b1c194b4eb55 864 0 2026-01-28 04:12:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67db4dc4b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com calico-apiserver-67db4dc4b5-cstcv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6ee5bf87e8e [] [] }} ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.763 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.864 [INFO][4690] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" HandleID="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.864 [INFO][4690] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" HandleID="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ec70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3avyi.gb1.brightbox.com", "pod":"calico-apiserver-67db4dc4b5-cstcv", "timestamp":"2026-01-28 04:13:03.864081538 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.864 [INFO][4690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.892 [INFO][4690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.892 [INFO][4690] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.947 [INFO][4690] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.961 [INFO][4690] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.972 [INFO][4690] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.975 [INFO][4690] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.980 [INFO][4690] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.980 [INFO][4690] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.983 [INFO][4690] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:03.995 [INFO][4690] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:04.016 [INFO][4690] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.68/26] block=192.168.63.64/26 handle="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:04.016 [INFO][4690] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.68/26] handle="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:04.016 [INFO][4690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 04:13:04.095041 containerd[1648]: 2026-01-28 04:13:04.016 [INFO][4690] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.68/26] IPv6=[] ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" HandleID="k8s-pod-network.5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.096109 containerd[1648]: 2026-01-28 04:13:04.026 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0", GenerateName:"calico-apiserver-67db4dc4b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a75b9d4b-fd28-4515-89fe-b1c194b4eb55", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67db4dc4b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-67db4dc4b5-cstcv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ee5bf87e8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:04.096109 containerd[1648]: 2026-01-28 04:13:04.027 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.68/32] ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.096109 containerd[1648]: 2026-01-28 04:13:04.027 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ee5bf87e8e ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.096109 containerd[1648]: 2026-01-28 04:13:04.047 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.096109 containerd[1648]: 2026-01-28 04:13:04.049 
[INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0", GenerateName:"calico-apiserver-67db4dc4b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a75b9d4b-fd28-4515-89fe-b1c194b4eb55", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67db4dc4b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc", Pod:"calico-apiserver-67db4dc4b5-cstcv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ee5bf87e8e", MAC:"3a:3d:58:61:bf:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:04.096109 containerd[1648]: 2026-01-28 04:13:04.083 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-cstcv" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--cstcv-eth0" Jan 28 04:13:04.121904 systemd[1]: Started cri-containerd-ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b.scope - libcontainer container ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b. 
Jan 28 04:13:04.172207 systemd-networkd[1551]: calib0d921d88e7: Link UP Jan 28 04:13:04.173000 audit[4767]: NETFILTER_CFG table=filter:131 family=2 entries=58 op=nft_register_chain pid=4767 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:04.173000 audit[4767]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffd985932d0 a2=0 a3=7ffd985932bc items=0 ppid=4330 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.173000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:04.176000 audit: BPF prog-id=225 op=LOAD Jan 28 04:13:04.177889 systemd-networkd[1551]: calib0d921d88e7: Gained carrier Jan 28 04:13:04.181000 audit: BPF prog-id=226 op=LOAD Jan 28 04:13:04.181000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.182000 audit: BPF prog-id=226 op=UNLOAD Jan 28 04:13:04.182000 audit[4737]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.182000 audit: BPF prog-id=227 op=LOAD Jan 28 04:13:04.182000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.183000 audit: BPF prog-id=228 op=LOAD Jan 28 04:13:04.183000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.183000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.183000 audit: BPF prog-id=228 op=UNLOAD Jan 28 04:13:04.183000 audit[4737]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.183000 audit: BPF prog-id=227 op=UNLOAD Jan 28 04:13:04.183000 audit[4737]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.183000 audit: BPF prog-id=229 op=LOAD Jan 28 04:13:04.183000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4724 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566366463386565326538323738346438636337383966653335663463 Jan 28 04:13:04.198558 containerd[1648]: time="2026-01-28T04:13:04.198498631Z" level=info msg="connecting to shim 5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc" address="unix:///run/containerd/s/de89779c722e3323e460e08c8a249d15926b9a6b6f6be6083216c980b39df971" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:03.741 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0 calico-apiserver-67db4dc4b5- calico-apiserver ab23ab24-7e12-4864-a3ee-8b4882a74a22 860 0 2026-01-28 04:12:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67db4dc4b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com calico-apiserver-67db4dc4b5-8mhgz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib0d921d88e7 [] [] }} ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" 
Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:03.742 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:03.870 [INFO][4681] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" HandleID="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:03.870 [INFO][4681] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" HandleID="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d8e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3avyi.gb1.brightbox.com", "pod":"calico-apiserver-67db4dc4b5-8mhgz", "timestamp":"2026-01-28 04:13:03.870278063 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:03.870 [INFO][4681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.017 [INFO][4681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.017 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.052 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.064 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.083 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.094 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.109 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.110 [INFO][4681] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.114 [INFO][4681] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37 Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.127 [INFO][4681] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.140 [INFO][4681] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.69/26] block=192.168.63.64/26 handle="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.141 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.69/26] handle="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.142 [INFO][4681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 04:13:04.212992 containerd[1648]: 2026-01-28 04:13:04.142 [INFO][4681] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.69/26] IPv6=[] ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" HandleID="k8s-pod-network.af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.216325 containerd[1648]: 2026-01-28 04:13:04.149 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0", GenerateName:"calico-apiserver-67db4dc4b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab23ab24-7e12-4864-a3ee-8b4882a74a22", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67db4dc4b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-67db4dc4b5-8mhgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib0d921d88e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:04.216325 containerd[1648]: 2026-01-28 04:13:04.151 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.69/32] ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.216325 containerd[1648]: 2026-01-28 04:13:04.151 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0d921d88e7 ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.216325 containerd[1648]: 2026-01-28 04:13:04.187 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.216325 containerd[1648]: 2026-01-28 04:13:04.189 
[INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0", GenerateName:"calico-apiserver-67db4dc4b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab23ab24-7e12-4864-a3ee-8b4882a74a22", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67db4dc4b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37", Pod:"calico-apiserver-67db4dc4b5-8mhgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib0d921d88e7", MAC:"0e:00:fb:39:65:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:04.216325 containerd[1648]: 2026-01-28 04:13:04.207 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" Namespace="calico-apiserver" Pod="calico-apiserver-67db4dc4b5-8mhgz" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--apiserver--67db4dc4b5--8mhgz-eth0" Jan 28 04:13:04.272569 systemd[1]: Started cri-containerd-5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc.scope - libcontainer container 5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc. 
Jan 28 04:13:04.279160 containerd[1648]: time="2026-01-28T04:13:04.277990676Z" level=info msg="connecting to shim af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37" address="unix:///run/containerd/s/b27ad8873fff46f0546c7fd6aeb139e0e40c1f62eec567fd84e70cfd05d5a671" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:04.322692 containerd[1648]: time="2026-01-28T04:13:04.322629365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v2j5w,Uid:128fdf3d-9296-4653-81bd-f8134dc33789,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b\"" Jan 28 04:13:04.325000 audit: BPF prog-id=230 op=LOAD Jan 28 04:13:04.325000 audit: BPF prog-id=231 op=LOAD Jan 28 04:13:04.325000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.326000 audit: BPF prog-id=231 op=UNLOAD Jan 28 04:13:04.326000 audit[4797]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.328000 audit: BPF prog-id=232 op=LOAD Jan 28 04:13:04.328000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.328000 audit: BPF prog-id=233 op=LOAD Jan 28 04:13:04.328000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.328000 audit: BPF prog-id=233 op=UNLOAD Jan 28 04:13:04.328000 audit[4797]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.328000 audit: BPF prog-id=232 op=UNLOAD Jan 28 04:13:04.328000 audit[4797]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.328000 audit: BPF prog-id=234 op=LOAD Jan 28 04:13:04.328000 audit[4797]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4779 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565666161663865363930303437656131623039626165343464396234 Jan 28 04:13:04.327000 audit[4835]: NETFILTER_CFG table=filter:132 family=2 entries=49 op=nft_register_chain pid=4835 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:04.327000 audit[4835]: SYSCALL arch=c000003e syscall=46 success=yes exit=25452 a0=3 a1=7ffd42bb4670 a2=0 a3=7ffd42bb465c items=0 ppid=4330 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.327000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:04.337618 containerd[1648]: time="2026-01-28T04:13:04.337068222Z" level=info msg="CreateContainer within sandbox \"ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 04:13:04.353891 systemd[1]: Started cri-containerd-af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37.scope - libcontainer container af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37. 
Jan 28 04:13:04.362472 containerd[1648]: time="2026-01-28T04:13:04.362413129Z" level=info msg="Container 22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:13:04.372638 containerd[1648]: time="2026-01-28T04:13:04.372597151Z" level=info msg="CreateContainer within sandbox \"ef6dc8ee2e82784d8cc789fe35f4c8ce8231b19bf1d6d9e96b9fec14b49e880b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380\"" Jan 28 04:13:04.374885 containerd[1648]: time="2026-01-28T04:13:04.374850471Z" level=info msg="StartContainer for \"22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380\"" Jan 28 04:13:04.376112 containerd[1648]: time="2026-01-28T04:13:04.376066761Z" level=info msg="connecting to shim 22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380" address="unix:///run/containerd/s/6b5b6ffb2970811434de65ccccffb542ce4f150ad2bf1c803c60d11f2da33526" protocol=ttrpc version=3 Jan 28 04:13:04.408000 audit: BPF prog-id=235 op=LOAD Jan 28 04:13:04.410000 audit: BPF prog-id=236 op=LOAD Jan 28 04:13:04.410000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.410000 audit: BPF prog-id=236 op=UNLOAD Jan 28 04:13:04.410000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.411000 audit: BPF prog-id=237 op=LOAD Jan 28 04:13:04.411000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.413000 audit: BPF prog-id=238 op=LOAD Jan 28 04:13:04.413000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.413000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.413000 audit: BPF prog-id=238 op=UNLOAD Jan 28 04:13:04.413000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.413000 audit: BPF prog-id=237 op=UNLOAD Jan 28 04:13:04.413000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.413000 audit: BPF prog-id=239 op=LOAD Jan 28 04:13:04.413000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4819 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166316531366335366439333865633436393338303861363764326266 Jan 28 04:13:04.419553 systemd[1]: Started cri-containerd-22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380.scope - libcontainer container 22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380. 
Jan 28 04:13:04.432030 containerd[1648]: time="2026-01-28T04:13:04.431422553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-cstcv,Uid:a75b9d4b-fd28-4515-89fe-b1c194b4eb55,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5efaaf8e690047ea1b09bae44d9b466ac8739525c1a37d3660c2945448408edc\"" Jan 28 04:13:04.436281 containerd[1648]: time="2026-01-28T04:13:04.435999101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:13:04.463000 audit: BPF prog-id=240 op=LOAD Jan 28 04:13:04.465000 audit: BPF prog-id=241 op=LOAD Jan 28 04:13:04.465000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.468000 audit: BPF prog-id=241 op=UNLOAD Jan 28 04:13:04.468000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.468000 audit: BPF prog-id=242 op=LOAD Jan 28 04:13:04.468000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.468000 audit: BPF prog-id=243 op=LOAD Jan 28 04:13:04.468000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.469000 audit: BPF prog-id=243 op=UNLOAD Jan 28 04:13:04.469000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.469000 audit: BPF prog-id=242 op=UNLOAD Jan 28 04:13:04.469000 audit[4864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.469000 audit: BPF prog-id=244 op=LOAD Jan 28 04:13:04.469000 audit[4864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4724 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232323733393334633936313632653932623239333733616434666332 Jan 28 04:13:04.521640 containerd[1648]: time="2026-01-28T04:13:04.521563388Z" level=info msg="StartContainer for \"22273934c96162e92b29373ad4fc2dcc07e9bdd559b5e5b6ed9d0de0193cc380\" returns successfully" Jan 28 04:13:04.541566 containerd[1648]: time="2026-01-28T04:13:04.541502278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67db4dc4b5-8mhgz,Uid:ab23ab24-7e12-4864-a3ee-8b4882a74a22,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af1e16c56d938ec4693808a67d2bfa8472edf7e547e35099fbcaa1fe6e9c5e37\"" Jan 28 04:13:04.616605 containerd[1648]: time="2026-01-28T04:13:04.616339555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sskjw,Uid:1c614f8b-7e15-4f62-a1d7-df2d998fe9fb,Namespace:kube-system,Attempt:0,}" Jan 28 04:13:04.770661 containerd[1648]: time="2026-01-28T04:13:04.770454339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:04.785094 containerd[1648]: time="2026-01-28T04:13:04.785015684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:13:04.785689 containerd[1648]: time="2026-01-28T04:13:04.785167774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:04.785900 kubelet[2950]: E0128 04:13:04.785783 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:04.785900 kubelet[2950]: E0128 04:13:04.785892 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:04.788177 containerd[1648]: time="2026-01-28T04:13:04.788111934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:13:04.788624 kubelet[2950]: E0128 04:13:04.788543 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r55sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:04.791094 kubelet[2950]: E0128 04:13:04.791047 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:13:04.884372 
systemd-networkd[1551]: cali26ab0c5260e: Link UP Jan 28 04:13:04.884684 systemd-networkd[1551]: cali26ab0c5260e: Gained carrier Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.716 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0 coredns-668d6bf9bc- kube-system 1c614f8b-7e15-4f62-a1d7-df2d998fe9fb 862 0 2026-01-28 04:12:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com coredns-668d6bf9bc-sskjw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26ab0c5260e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.717 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.792 [INFO][4919] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" HandleID="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Workload="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.793 [INFO][4919] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" HandleID="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Workload="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039d820), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-3avyi.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-sskjw", "timestamp":"2026-01-28 04:13:04.792731923 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.793 [INFO][4919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.793 [INFO][4919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.793 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.808 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.835 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.846 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.849 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.852 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.852 [INFO][4919] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.855 [INFO][4919] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8 Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.861 [INFO][4919] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.872 [INFO][4919] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.70/26] block=192.168.63.64/26 handle="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.872 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.70/26] handle="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.873 [INFO][4919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 04:13:04.912882 containerd[1648]: 2026-01-28 04:13:04.873 [INFO][4919] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.70/26] IPv6=[] ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" HandleID="k8s-pod-network.5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Workload="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.914043 containerd[1648]: 2026-01-28 04:13:04.876 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1c614f8b-7e15-4f62-a1d7-df2d998fe9fb", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-sskjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26ab0c5260e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:04.914043 containerd[1648]: 2026-01-28 04:13:04.876 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.70/32] ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.914043 containerd[1648]: 2026-01-28 04:13:04.876 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26ab0c5260e ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.914043 containerd[1648]: 2026-01-28 04:13:04.883 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.914043 containerd[1648]: 2026-01-28 04:13:04.889 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1c614f8b-7e15-4f62-a1d7-df2d998fe9fb", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8", Pod:"coredns-668d6bf9bc-sskjw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26ab0c5260e", MAC:"9e:33:20:14:17:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:04.914043 containerd[1648]: 2026-01-28 04:13:04.905 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-sskjw" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sskjw-eth0" Jan 28 04:13:04.947835 containerd[1648]: time="2026-01-28T04:13:04.947604212Z" level=info msg="connecting to shim 5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8" address="unix:///run/containerd/s/77bdcff740234177a44f6175c191a974fd03946261d6706a81a644ef5d1869de" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:04.956000 audit[4950]: NETFILTER_CFG table=filter:133 family=2 entries=48 op=nft_register_chain pid=4950 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:04.956000 audit[4950]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7ffd5c3bb830 a2=0 a3=7ffd5c3bb81c items=0 ppid=4330 pid=4950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:04.956000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:04.988538 systemd[1]: Started cri-containerd-5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8.scope - libcontainer container 5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8. Jan 28 04:13:05.008000 audit: BPF prog-id=245 op=LOAD Jan 28 04:13:05.009000 audit: BPF prog-id=246 op=LOAD Jan 28 04:13:05.009000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.009000 audit: BPF prog-id=246 op=UNLOAD Jan 28 04:13:05.009000 audit[4957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.010000 audit: BPF prog-id=247 op=LOAD Jan 28 04:13:05.010000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.010000 audit: BPF prog-id=248 op=LOAD Jan 28 04:13:05.010000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.010000 audit: BPF prog-id=248 op=UNLOAD Jan 28 04:13:05.010000 audit[4957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.011000 audit: BPF prog-id=247 op=UNLOAD Jan 28 04:13:05.011000 audit[4957]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.011000 audit: BPF prog-id=249 op=LOAD Jan 28 04:13:05.011000 audit[4957]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4945 pid=4957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535353166616237353763663836353062353966343039383736346264 Jan 28 04:13:05.068807 containerd[1648]: time="2026-01-28T04:13:05.068623303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sskjw,Uid:1c614f8b-7e15-4f62-a1d7-df2d998fe9fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8\"" Jan 28 04:13:05.074610 containerd[1648]: time="2026-01-28T04:13:05.074558436Z" level=info msg="CreateContainer within sandbox \"5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 04:13:05.104055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831721275.mount: Deactivated successfully. 
Jan 28 04:13:05.117789 containerd[1648]: time="2026-01-28T04:13:05.117743266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:05.118354 containerd[1648]: time="2026-01-28T04:13:05.118316427Z" level=info msg="Container b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7: CDI devices from CRI Config.CDIDevices: []" Jan 28 04:13:05.119012 containerd[1648]: time="2026-01-28T04:13:05.118973153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:13:05.119163 containerd[1648]: time="2026-01-28T04:13:05.119134841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:05.119516 kubelet[2950]: E0128 04:13:05.119426 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:05.119641 kubelet[2950]: E0128 04:13:05.119526 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:05.119884 kubelet[2950]: E0128 04:13:05.119780 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6p6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:05.121364 kubelet[2950]: E0128 04:13:05.121326 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:05.130544 containerd[1648]: time="2026-01-28T04:13:05.130243444Z" level=info msg="CreateContainer within sandbox \"5551fab757cf8650b59f4098764bdc762477b96f5dd2213352f511e855d980e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7\"" Jan 28 04:13:05.132641 containerd[1648]: time="2026-01-28T04:13:05.132612257Z" level=info msg="StartContainer for \"b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7\"" Jan 28 04:13:05.134302 kubelet[2950]: E0128 04:13:05.132250 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:05.137020 kubelet[2950]: E0128 04:13:05.136501 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:13:05.138030 containerd[1648]: time="2026-01-28T04:13:05.137988988Z" level=info msg="connecting to shim b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7" 
address="unix:///run/containerd/s/77bdcff740234177a44f6175c191a974fd03946261d6706a81a644ef5d1869de" protocol=ttrpc version=3 Jan 28 04:13:05.188538 systemd[1]: Started cri-containerd-b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7.scope - libcontainer container b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7. Jan 28 04:13:05.219000 audit[5002]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:05.219000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff0d67bc50 a2=0 a3=7fff0d67bc3c items=0 ppid=3100 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:05.224000 audit[5002]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:05.224000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff0d67bc50 a2=0 a3=0 items=0 ppid=3100 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:05.243478 systemd-networkd[1551]: calif260f824bf6: Gained IPv6LL Jan 28 04:13:05.251000 audit: BPF prog-id=250 op=LOAD Jan 28 04:13:05.254000 audit: BPF prog-id=251 op=LOAD Jan 28 04:13:05.254000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.254000 audit: BPF prog-id=251 op=UNLOAD Jan 28 04:13:05.254000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.255000 audit: BPF prog-id=252 op=LOAD Jan 28 04:13:05.255000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.255000 audit: BPF prog-id=253 op=LOAD Jan 28 04:13:05.255000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.255000 audit: BPF prog-id=253 op=UNLOAD Jan 28 04:13:05.255000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.255000 audit: BPF prog-id=252 op=UNLOAD Jan 28 04:13:05.255000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.256000 audit: BPF prog-id=254 op=LOAD Jan 28 04:13:05.256000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4945 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.256000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316632623638653264653636616366383764326561643830633133 Jan 28 04:13:05.262000 audit[5004]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:05.262000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc719c760 a2=0 a3=7ffcc719c74c items=0 ppid=3100 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:05.268000 audit[5004]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:05.268000 audit[5004]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc719c760 a2=0 a3=0 items=0 ppid=3100 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:05.304711 containerd[1648]: time="2026-01-28T04:13:05.304646357Z" level=info msg="StartContainer for \"b41f2b68e2de66acf87d2ead80c137d80752735c8a168e1aa7cb148910622fa7\" returns successfully" Jan 28 04:13:05.614813 containerd[1648]: time="2026-01-28T04:13:05.614707576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnlhb,Uid:c2a88baa-8755-4a0f-b81e-f2ef466fcd2d,Namespace:calico-system,Attempt:0,}" Jan 28 04:13:05.628454 systemd-networkd[1551]: calib0d921d88e7: Gained IPv6LL Jan 28 04:13:05.691631 systemd-networkd[1551]: cali6ee5bf87e8e: Gained IPv6LL Jan 28 04:13:05.840763 systemd-networkd[1551]: cali7f5aff1ad63: Link UP Jan 28 04:13:05.841854 systemd-networkd[1551]: cali7f5aff1ad63: Gained carrier Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.711 [INFO][5020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0 csi-node-driver- calico-system c2a88baa-8755-4a0f-b81e-f2ef466fcd2d 739 0 2026-01-28 04:12:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com csi-node-driver-bnlhb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7f5aff1ad63 [] [] }} ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.711 [INFO][5020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.771 [INFO][5031] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" HandleID="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Workload="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.772 [INFO][5031] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" HandleID="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Workload="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3avyi.gb1.brightbox.com", "pod":"csi-node-driver-bnlhb", "timestamp":"2026-01-28 04:13:05.771017181 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.772 [INFO][5031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.772 [INFO][5031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.772 [INFO][5031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.782 [INFO][5031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.794 [INFO][5031] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.802 [INFO][5031] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.805 [INFO][5031] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.810 [INFO][5031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.810 [INFO][5031] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.813 [INFO][5031] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59 Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.821 [INFO][5031] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.830 [INFO][5031] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.71/26] block=192.168.63.64/26 handle="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.831 [INFO][5031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.71/26] handle="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 
04:13:05.831 [INFO][5031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 04:13:05.875401 containerd[1648]: 2026-01-28 04:13:05.831 [INFO][5031] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.71/26] IPv6=[] ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" HandleID="k8s-pod-network.c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Workload="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.877682 containerd[1648]: 2026-01-28 04:13:05.834 [INFO][5020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-bnlhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f5aff1ad63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:05.877682 containerd[1648]: 2026-01-28 04:13:05.834 [INFO][5020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.71/32] ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.877682 containerd[1648]: 2026-01-28 04:13:05.834 [INFO][5020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f5aff1ad63 ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.877682 containerd[1648]: 2026-01-28 04:13:05.843 [INFO][5020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.877682 containerd[1648]: 2026-01-28 04:13:05.845 [INFO][5020] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2a88baa-8755-4a0f-b81e-f2ef466fcd2d", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59", Pod:"csi-node-driver-bnlhb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7f5aff1ad63", MAC:"b6:68:c3:27:bc:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:05.877682 containerd[1648]: 2026-01-28 04:13:05.864 [INFO][5020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" Namespace="calico-system" Pod="csi-node-driver-bnlhb" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-csi--node--driver--bnlhb-eth0" Jan 28 04:13:05.888136 kubelet[2950]: I0128 04:13:05.886080 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v2j5w" podStartSLOduration=64.863063108 podStartE2EDuration="1m4.863063108s" podCreationTimestamp="2026-01-28 04:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 04:13:05.237557497 +0000 UTC m=+70.849511281" watchObservedRunningTime="2026-01-28 04:13:05.863063108 +0000 UTC m=+71.475016894" Jan 28 04:13:05.907000 audit[5046]: NETFILTER_CFG table=filter:138 family=2 entries=56 op=nft_register_chain pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:05.907000 audit[5046]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffc72444960 a2=0 a3=7ffc7244494c items=0 ppid=4330 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:05.907000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:05.933306 
containerd[1648]: time="2026-01-28T04:13:05.932911996Z" level=info msg="connecting to shim c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59" address="unix:///run/containerd/s/b217128f30cafbd95bdcc74456d02ccdbf88d1e559ddba168b13f96ce4f77ec6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:05.997629 systemd[1]: Started cri-containerd-c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59.scope - libcontainer container c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59. Jan 28 04:13:06.015000 audit: BPF prog-id=255 op=LOAD Jan 28 04:13:06.016000 audit: BPF prog-id=256 op=LOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.016000 audit: BPF prog-id=256 op=UNLOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.016000 audit: BPF prog-id=257 op=LOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.016000 audit: BPF prog-id=258 op=LOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.016000 audit: BPF prog-id=258 op=UNLOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.016000 audit: BPF prog-id=257 op=UNLOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.016000 audit: BPF prog-id=259 op=LOAD Jan 28 04:13:06.016000 audit[5067]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5056 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332383931656566383532393335656530386132366230326365353536 Jan 28 04:13:06.049046 containerd[1648]: time="2026-01-28T04:13:06.048945264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnlhb,Uid:c2a88baa-8755-4a0f-b81e-f2ef466fcd2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2891eef852935ee08a26b02ce556f7799c4c2524872564c8496d1ae51fccf59\"" Jan 28 04:13:06.052798 containerd[1648]: time="2026-01-28T04:13:06.052555980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 04:13:06.161595 kubelet[2950]: E0128 04:13:06.161520 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:06.164781 kubelet[2950]: E0128 04:13:06.164745 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:13:06.207046 kubelet[2950]: I0128 04:13:06.206926 2950 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-sskjw" podStartSLOduration=65.206895741 podStartE2EDuration="1m5.206895741s" podCreationTimestamp="2026-01-28 04:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 04:13:06.189406935 +0000 UTC m=+71.801360743" watchObservedRunningTime="2026-01-28 04:13:06.206895741 +0000 UTC m=+71.818849527" Jan 28 04:13:06.233000 audit[5094]: NETFILTER_CFG table=filter:139 family=2 entries=17 op=nft_register_rule pid=5094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:06.233000 audit[5094]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca7cfb790 a2=0 a3=7ffca7cfb77c items=0 ppid=3100 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.233000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:06.254000 audit[5094]: NETFILTER_CFG table=nat:140 family=2 entries=35 op=nft_register_chain pid=5094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:06.254000 audit[5094]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca7cfb790 a2=0 a3=7ffca7cfb77c items=0 ppid=3100 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:06.388365 containerd[1648]: time="2026-01-28T04:13:06.388205815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:06.390088 containerd[1648]: time="2026-01-28T04:13:06.389830938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 04:13:06.390088 containerd[1648]: time="2026-01-28T04:13:06.389917641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:06.390466 kubelet[2950]: E0128 04:13:06.390098 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:13:06.390466 kubelet[2950]: E0128 04:13:06.390174 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:13:06.390466 kubelet[2950]: E0128 04:13:06.390370 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:06.392700 containerd[1648]: time="2026-01-28T04:13:06.392667682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 04:13:06.614810 containerd[1648]: time="2026-01-28T04:13:06.614456883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5877564c64-ssm6r,Uid:53f99505-aca3-4278-8799-01f0eba5681f,Namespace:calico-system,Attempt:0,}" Jan 28 04:13:06.695314 containerd[1648]: time="2026-01-28T04:13:06.695136433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:06.696615 containerd[1648]: time="2026-01-28T04:13:06.696424989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 04:13:06.696775 containerd[1648]: time="2026-01-28T04:13:06.696446081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:06.698672 kubelet[2950]: E0128 04:13:06.698611 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:13:06.698817 kubelet[2950]: E0128 04:13:06.698691 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:13:06.699311 kubelet[2950]: E0128 04:13:06.699221 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:06.701280 kubelet[2950]: E0128 04:13:06.701160 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 
04:13:06.812812 systemd-networkd[1551]: caliab8f3622b48: Link UP Jan 28 04:13:06.813943 systemd-networkd[1551]: caliab8f3622b48: Gained carrier Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.684 [INFO][5095] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0 calico-kube-controllers-5877564c64- calico-system 53f99505-aca3-4278-8799-01f0eba5681f 866 0 2026-01-28 04:12:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5877564c64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-3avyi.gb1.brightbox.com calico-kube-controllers-5877564c64-ssm6r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab8f3622b48 [] [] }} ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.684 [INFO][5095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.740 [INFO][5107] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" HandleID="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.742 [INFO][5107] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" HandleID="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3avyi.gb1.brightbox.com", "pod":"calico-kube-controllers-5877564c64-ssm6r", "timestamp":"2026-01-28 04:13:06.740902483 +0000 UTC"}, Hostname:"srv-3avyi.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.742 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.743 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.743 [INFO][5107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3avyi.gb1.brightbox.com' Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.761 [INFO][5107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.772 [INFO][5107] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.778 [INFO][5107] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.781 [INFO][5107] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.785 [INFO][5107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.785 [INFO][5107] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.787 [INFO][5107] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019 Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.793 [INFO][5107] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.804 [INFO][5107] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.72/26] block=192.168.63.64/26 handle="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.804 [INFO][5107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.72/26] handle="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" host="srv-3avyi.gb1.brightbox.com" Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.804 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 04:13:06.842840 containerd[1648]: 2026-01-28 04:13:06.805 [INFO][5107] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.72/26] IPv6=[] ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" HandleID="k8s-pod-network.6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Workload="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.844485 containerd[1648]: 2026-01-28 04:13:06.808 [INFO][5095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0", GenerateName:"calico-kube-controllers-5877564c64-", Namespace:"calico-system", SelfLink:"", UID:"53f99505-aca3-4278-8799-01f0eba5681f", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5877564c64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5877564c64-ssm6r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab8f3622b48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:06.844485 containerd[1648]: 2026-01-28 04:13:06.808 [INFO][5095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.72/32] ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.844485 containerd[1648]: 2026-01-28 04:13:06.808 [INFO][5095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab8f3622b48 ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.844485 containerd[1648]: 2026-01-28 04:13:06.815 [INFO][5095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" 
WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.844485 containerd[1648]: 2026-01-28 04:13:06.816 [INFO][5095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0", GenerateName:"calico-kube-controllers-5877564c64-", Namespace:"calico-system", SelfLink:"", UID:"53f99505-aca3-4278-8799-01f0eba5681f", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 4, 12, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5877564c64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3avyi.gb1.brightbox.com", ContainerID:"6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019", Pod:"calico-kube-controllers-5877564c64-ssm6r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab8f3622b48", MAC:"da:b6:f0:2d:97:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 04:13:06.844485 containerd[1648]: 2026-01-28 04:13:06.833 [INFO][5095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" Namespace="calico-system" Pod="calico-kube-controllers-5877564c64-ssm6r" WorkloadEndpoint="srv--3avyi.gb1.brightbox.com-k8s-calico--kube--controllers--5877564c64--ssm6r-eth0" Jan 28 04:13:06.888075 kernel: kauditd_printk_skb: 208 callbacks suppressed Jan 28 04:13:06.895808 kernel: audit: type=1325 audit(1769573586.876:746): table=filter:141 family=2 entries=60 op=nft_register_chain pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:06.895867 kernel: audit: type=1300 audit(1769573586.876:746): arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffd69ee66b0 a2=0 a3=7ffd69ee669c items=0 ppid=4330 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.895932 kernel: audit: type=1327 audit(1769573586.876:746): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:06.876000 audit[5127]: NETFILTER_CFG table=filter:141 family=2 entries=60 op=nft_register_chain 
pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 04:13:06.876000 audit[5127]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffd69ee66b0 a2=0 a3=7ffd69ee669c items=0 ppid=4330 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.876000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 04:13:06.905108 containerd[1648]: time="2026-01-28T04:13:06.905022615Z" level=info msg="connecting to shim 6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019" address="unix:///run/containerd/s/11664c71fbaa3f0c1150ee638774b6d974adc025f53a3539ada4a21caf64f214" namespace=k8s.io protocol=ttrpc version=3 Jan 28 04:13:06.907839 systemd-networkd[1551]: cali26ab0c5260e: Gained IPv6LL Jan 28 04:13:06.954535 systemd[1]: Started cri-containerd-6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019.scope - libcontainer container 6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019. Jan 28 04:13:06.976000 audit: BPF prog-id=260 op=LOAD Jan 28 04:13:06.979293 kernel: audit: type=1334 audit(1769573586.976:747): prog-id=260 op=LOAD Jan 28 04:13:06.978000 audit: BPF prog-id=261 op=LOAD Jan 28 04:13:06.978000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.983364 kernel: audit: type=1334 audit(1769573586.978:748): prog-id=261 op=LOAD Jan 28 04:13:06.983445 kernel: audit: type=1300 audit(1769573586.978:748): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.988330 kernel: audit: type=1327 audit(1769573586.978:748): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.978000 audit: BPF prog-id=261 op=UNLOAD Jan 28 04:13:06.992812 kernel: audit: type=1334 audit(1769573586.978:749): prog-id=261 op=UNLOAD Jan 28 04:13:06.992878 kernel: audit: type=1300 audit(1769573586.978:749): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.978000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.999377 kernel: audit: type=1327 audit(1769573586.978:749): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.980000 audit: BPF prog-id=262 op=LOAD Jan 28 04:13:06.980000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.980000 audit: BPF prog-id=263 op=LOAD Jan 28 04:13:06.980000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.980000 audit: BPF prog-id=263 op=UNLOAD Jan 28 04:13:06.980000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.980000 audit: BPF prog-id=262 op=UNLOAD Jan 28 04:13:06.980000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:06.980000 audit: BPF prog-id=264 op=LOAD Jan 28 
04:13:06.980000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5136 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:06.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665633332336439376561343961353266303037313165393431333434 Jan 28 04:13:07.056535 containerd[1648]: time="2026-01-28T04:13:07.056459826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5877564c64-ssm6r,Uid:53f99505-aca3-4278-8799-01f0eba5681f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ec323d97ea49a52f00711e94134488ea6a71f8a3471876f031a5138aa6f2019\"" Jan 28 04:13:07.060485 containerd[1648]: time="2026-01-28T04:13:07.060451670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 04:13:07.169924 kubelet[2950]: E0128 04:13:07.169740 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:13:07.288000 audit[5177]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5177 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:07.288000 audit[5177]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe56e74f00 a2=0 a3=7ffe56e74eec items=0 ppid=3100 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:07.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:07.303000 audit[5177]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=5177 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:13:07.303000 audit[5177]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe56e74f00 a2=0 a3=7ffe56e74eec items=0 ppid=3100 pid=5177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:07.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:13:07.368317 containerd[1648]: time="2026-01-28T04:13:07.368059175Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:07.369737 containerd[1648]: time="2026-01-28T04:13:07.369592669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 04:13:07.369737 containerd[1648]: time="2026-01-28T04:13:07.369609858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:07.370172 kubelet[2950]: E0128 04:13:07.370108 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:13:07.370319 kubelet[2950]: E0128 04:13:07.370207 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:13:07.370523 kubelet[2950]: E0128 04:13:07.370452 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j65ns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:07.372011 kubelet[2950]: E0128 04:13:07.371940 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:13:07.422367 systemd-networkd[1551]: cali7f5aff1ad63: Gained IPv6LL Jan 28 04:13:08.059505 systemd-networkd[1551]: caliab8f3622b48: Gained IPv6LL Jan 28 04:13:08.171773 kubelet[2950]: E0128 04:13:08.171708 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:13:09.614623 containerd[1648]: time="2026-01-28T04:13:09.614403045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 04:13:09.944873 containerd[1648]: time="2026-01-28T04:13:09.944758884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:09.993367 containerd[1648]: time="2026-01-28T04:13:09.993126548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 04:13:09.993367 containerd[1648]: time="2026-01-28T04:13:09.993332610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:09.994174 kubelet[2950]: E0128 04:13:09.994094 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:13:09.994766 kubelet[2950]: E0128 04:13:09.994190 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:13:09.994766 kubelet[2950]: E0128 04:13:09.994411 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e1e1e2ca0574df88a00a87ecf91f97d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:09.997757 containerd[1648]: time="2026-01-28T04:13:09.997682905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 04:13:10.358733 containerd[1648]: time="2026-01-28T04:13:10.358481844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:10.440635 containerd[1648]: time="2026-01-28T04:13:10.440480382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 04:13:10.440948 containerd[1648]: time="2026-01-28T04:13:10.440697758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:10.441403 kubelet[2950]: E0128 04:13:10.441341 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:13:10.441568 kubelet[2950]: E0128 04:13:10.441465 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:13:10.443353 kubelet[2950]: E0128 04:13:10.441681 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:10.443543 kubelet[2950]: E0128 04:13:10.443477 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:13:17.618213 containerd[1648]: time="2026-01-28T04:13:17.617843390Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 04:13:17.939051 containerd[1648]: time="2026-01-28T04:13:17.938955945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:17.940532 containerd[1648]: time="2026-01-28T04:13:17.940414679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 04:13:17.940532 containerd[1648]: time="2026-01-28T04:13:17.940485481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:17.941001 kubelet[2950]: E0128 04:13:17.940930 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:13:17.943085 kubelet[2950]: E0128 04:13:17.941531 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:13:17.943085 kubelet[2950]: E0128 04:13:17.941895 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:17.943330 containerd[1648]: time="2026-01-28T04:13:17.941960978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:13:18.246698 containerd[1648]: time="2026-01-28T04:13:18.246356562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:18.251383 containerd[1648]: time="2026-01-28T04:13:18.251194044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:13:18.251383 containerd[1648]: time="2026-01-28T04:13:18.251222595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:18.251523 kubelet[2950]: E0128 04:13:18.251468 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:18.251602 kubelet[2950]: E0128 04:13:18.251527 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:18.251956 kubelet[2950]: E0128 04:13:18.251807 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6p6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:18.254426 kubelet[2950]: E0128 04:13:18.254392 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:18.254927 containerd[1648]: time="2026-01-28T04:13:18.254716705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 04:13:18.564163 containerd[1648]: time="2026-01-28T04:13:18.563984832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:18.565828 containerd[1648]: time="2026-01-28T04:13:18.565529729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 04:13:18.565828 containerd[1648]: time="2026-01-28T04:13:18.565582734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:18.566175 kubelet[2950]: E0128 04:13:18.566125 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:13:18.566343 kubelet[2950]: E0128 04:13:18.566191 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:13:18.566797 kubelet[2950]: E0128 04:13:18.566722 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6z88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:18.567917 containerd[1648]: time="2026-01-28T04:13:18.567882232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 04:13:18.568470 kubelet[2950]: E0128 04:13:18.568404 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" 
podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:13:18.872454 containerd[1648]: time="2026-01-28T04:13:18.872221176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:18.873451 containerd[1648]: time="2026-01-28T04:13:18.873397253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 04:13:18.873536 containerd[1648]: time="2026-01-28T04:13:18.873504071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:18.874089 kubelet[2950]: E0128 04:13:18.873777 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:13:18.874089 kubelet[2950]: E0128 04:13:18.873842 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:13:18.874089 kubelet[2950]: E0128 04:13:18.874018 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:18.875285 kubelet[2950]: E0128 04:13:18.875203 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:13:20.614447 containerd[1648]: time="2026-01-28T04:13:20.614308637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 04:13:20.936805 containerd[1648]: time="2026-01-28T04:13:20.936711068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:20.939581 containerd[1648]: time="2026-01-28T04:13:20.939506205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 04:13:20.939816 containerd[1648]: time="2026-01-28T04:13:20.939616762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:20.940102 kubelet[2950]: E0128 04:13:20.940046 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:13:20.941350 kubelet[2950]: E0128 04:13:20.940121 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:13:20.941350 kubelet[2950]: E0128 04:13:20.940350 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j65ns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:20.942020 kubelet[2950]: E0128 04:13:20.941965 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:13:21.615115 containerd[1648]: time="2026-01-28T04:13:21.614770422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:13:21.615715 kubelet[2950]: E0128 04:13:21.614941 2950 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:13:21.944633 containerd[1648]: time="2026-01-28T04:13:21.944420420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:21.954242 containerd[1648]: time="2026-01-28T04:13:21.954177854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:13:21.955082 containerd[1648]: time="2026-01-28T04:13:21.955043802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:21.961201 kubelet[2950]: E0128 04:13:21.960006 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:21.961201 kubelet[2950]: E0128 04:13:21.960078 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:21.962173 kubelet[2950]: E0128 04:13:21.960231 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r55sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:21.962369 kubelet[2950]: E0128 04:13:21.962331 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:13:22.498136 systemd[1]: Started sshd@9-10.230.66.102:22-4.153.228.146:51692.service - OpenSSH per-connection server daemon (4.153.228.146:51692). Jan 28 04:13:22.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.66.102:22-4.153.228.146:51692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:22.505211 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 28 04:13:22.505511 kernel: audit: type=1130 audit(1769573602.497:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.66.102:22-4.153.228.146:51692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:13:23.134000 audit[5198]: USER_ACCT pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.144251 sshd[5198]: Accepted publickey for core from 4.153.228.146 port 51692 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:23.146000 audit[5198]: CRED_ACQ pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.150279 kernel: audit: type=1101 audit(1769573603.134:758): pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.150417 kernel: audit: type=1103 audit(1769573603.146:759): pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.151065 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:23.146000 audit[5198]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdbcff310 a2=3 a3=0 items=0 ppid=1 pid=5198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:23.158403 kernel: audit: type=1006 audit(1769573603.146:760): pid=5198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 04:13:23.158471 kernel: audit: type=1300 audit(1769573603.146:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdbcff310 a2=3 a3=0 items=0 ppid=1 pid=5198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:23.163092 kernel: audit: type=1327 audit(1769573603.146:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:23.146000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:23.169482 systemd-logind[1617]: New session 13 of user core. Jan 28 04:13:23.174558 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 04:13:23.179000 audit[5198]: USER_START pid=5198 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.187382 kernel: audit: type=1105 audit(1769573603.179:761): pid=5198 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.188000 audit[5202]: CRED_ACQ pid=5202 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:23.194292 kernel: audit: type=1103 audit(1769573603.188:762): pid=5202 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:24.029039 sshd[5202]: Connection closed by 4.153.228.146 port 51692 Jan 28 04:13:24.029601 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:24.036000 audit[5198]: USER_END pid=5198 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:24.048818 systemd-logind[1617]: Session 13 logged out. Waiting for processes to exit. Jan 28 04:13:24.036000 audit[5198]: CRED_DISP pid=5198 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:24.051962 kernel: audit: type=1106 audit(1769573604.036:763): pid=5198 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:24.052081 kernel: audit: type=1104 audit(1769573604.036:764): pid=5198 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:24.053315 systemd[1]: sshd@9-10.230.66.102:22-4.153.228.146:51692.service: Deactivated successfully. Jan 28 04:13:24.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.66.102:22-4.153.228.146:51692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:24.058883 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 04:13:24.063070 systemd-logind[1617]: Removed session 13. 
Jan 28 04:13:29.138634 systemd[1]: Started sshd@10-10.230.66.102:22-4.153.228.146:53874.service - OpenSSH per-connection server daemon (4.153.228.146:53874). Jan 28 04:13:29.143533 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:13:29.143921 kernel: audit: type=1130 audit(1769573609.137:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.66.102:22-4.153.228.146:53874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:29.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.66.102:22-4.153.228.146:53874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:29.681000 audit[5245]: USER_ACCT pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.687275 sshd[5245]: Accepted publickey for core from 4.153.228.146 port 53874 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:29.684000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.689636 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:29.689983 kernel: audit: type=1101 audit(1769573609.681:767): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.690412 kernel: audit: type=1103 audit(1769573609.684:768): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.694230 kernel: audit: type=1006 audit(1769573609.687:769): pid=5245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 28 04:13:29.687000 audit[5245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc841aaac0 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:29.687000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:29.703511 kernel: audit: type=1300 audit(1769573609.687:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc841aaac0 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:29.703603 kernel: audit: type=1327 audit(1769573609.687:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:29.708187 systemd-logind[1617]: New session 14 of user core. 
Jan 28 04:13:29.720893 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 04:13:29.726000 audit[5245]: USER_START pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.730000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.736374 kernel: audit: type=1105 audit(1769573609.726:770): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:29.736458 kernel: audit: type=1103 audit(1769573609.730:771): pid=5249 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:30.084109 sshd[5249]: Connection closed by 4.153.228.146 port 53874 Jan 28 04:13:30.085056 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:30.086000 audit[5245]: USER_END pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:30.086000 audit[5245]: CRED_DISP pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:30.095847 kernel: audit: type=1106 audit(1769573610.086:772): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:30.095923 kernel: audit: type=1104 audit(1769573610.086:773): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:30.096867 systemd-logind[1617]: Session 14 logged out. Waiting for processes to exit. Jan 28 04:13:30.100939 systemd[1]: sshd@10-10.230.66.102:22-4.153.228.146:53874.service: Deactivated successfully. Jan 28 04:13:30.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.66.102:22-4.153.228.146:53874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:13:30.104733 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 04:13:30.107426 systemd-logind[1617]: Removed session 14. Jan 28 04:13:31.615726 kubelet[2950]: E0128 04:13:31.615190 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:32.616146 kubelet[2950]: E0128 04:13:32.615945 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:13:32.618069 kubelet[2950]: E0128 04:13:32.616663 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:13:34.617338 kubelet[2950]: E0128 04:13:34.617236 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:13:35.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.66.102:22-4.153.228.146:55800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:35.199727 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:13:35.199861 kernel: audit: type=1130 audit(1769573615.192:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.66.102:22-4.153.228.146:55800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:13:35.192642 systemd[1]: Started sshd@11-10.230.66.102:22-4.153.228.146:55800.service - OpenSSH per-connection server daemon (4.153.228.146:55800). Jan 28 04:13:35.619079 kubelet[2950]: E0128 04:13:35.617699 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:13:35.719000 audit[5266]: USER_ACCT pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.721106 sshd[5266]: Accepted publickey for core from 4.153.228.146 port 55800 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:35.724018 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:35.721000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.726705 kernel: audit: type=1101 audit(1769573615.719:776): pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.726813 kernel: audit: type=1103 audit(1769573615.721:777): pid=5266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.731097 kernel: audit: type=1006 audit(1769573615.722:778): pid=5266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 28 04:13:35.722000 audit[5266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3746e2d0 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:35.740205 kernel: audit: type=1300 audit(1769573615.722:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3746e2d0 a2=3 a3=0 items=0 ppid=1 pid=5266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:35.722000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:35.745365 kernel: audit: type=1327 audit(1769573615.722:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:35.748363 systemd-logind[1617]: New session 15 of user core. 
Jan 28 04:13:35.762574 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 04:13:35.768000 audit[5266]: USER_START pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.771000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.776394 kernel: audit: type=1105 audit(1769573615.768:779): pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:35.776505 kernel: audit: type=1103 audit(1769573615.771:780): pid=5270 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:36.113336 sshd[5270]: Connection closed by 4.153.228.146 port 55800 Jan 28 04:13:36.114525 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:36.116000 audit[5266]: USER_END pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:36.130302 kernel: audit: type=1106 audit(1769573616.116:781): pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:36.116000 audit[5266]: CRED_DISP pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:36.131847 systemd[1]: sshd@11-10.230.66.102:22-4.153.228.146:55800.service: Deactivated successfully. Jan 28 04:13:36.135366 kernel: audit: type=1104 audit(1769573616.116:782): pid=5266 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:36.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.66.102:22-4.153.228.146:55800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:36.137154 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 04:13:36.140353 systemd-logind[1617]: Session 15 logged out. 
Waiting for processes to exit. Jan 28 04:13:36.143862 systemd-logind[1617]: Removed session 15. Jan 28 04:13:36.618104 containerd[1648]: time="2026-01-28T04:13:36.617754071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 04:13:36.948330 containerd[1648]: time="2026-01-28T04:13:36.948237546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:36.949445 containerd[1648]: time="2026-01-28T04:13:36.949387722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 04:13:36.949753 containerd[1648]: time="2026-01-28T04:13:36.949517005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:36.950012 kubelet[2950]: E0128 04:13:36.949845 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:13:36.950012 kubelet[2950]: E0128 04:13:36.949953 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:13:36.950579 kubelet[2950]: E0128 04:13:36.950403 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e1e1e2ca0574df88a00a87ecf91f97d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 
04:13:36.953479 containerd[1648]: time="2026-01-28T04:13:36.953445819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 04:13:37.279431 containerd[1648]: time="2026-01-28T04:13:37.279160918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:37.280599 containerd[1648]: time="2026-01-28T04:13:37.280538478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 04:13:37.280599 containerd[1648]: time="2026-01-28T04:13:37.280550826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:37.281052 kubelet[2950]: E0128 04:13:37.280989 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:13:37.281226 kubelet[2950]: E0128 04:13:37.281198 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:13:37.281592 kubelet[2950]: E0128 04:13:37.281508 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:37.283002 kubelet[2950]: E0128 04:13:37.282933 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:13:41.226081 systemd[1]: Started sshd@12-10.230.66.102:22-4.153.228.146:55812.service - OpenSSH per-connection server daemon (4.153.228.146:55812). Jan 28 04:13:41.243768 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:13:41.243888 kernel: audit: type=1130 audit(1769573621.225:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.66.102:22-4.153.228.146:55812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:41.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.66.102:22-4.153.228.146:55812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:13:41.783000 audit[5291]: USER_ACCT pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.788914 sshd[5291]: Accepted publickey for core from 4.153.228.146 port 55812 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:41.790310 kernel: audit: type=1101 audit(1769573621.783:785): pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.790000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.794015 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:41.797537 kernel: audit: type=1103 audit(1769573621.790:786): pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.810640 kernel: audit: type=1006 audit(1769573621.790:787): pid=5291 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 28 04:13:41.810901 kernel: audit: type=1300 audit(1769573621.790:787): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc3395300 a2=3 a3=0 items=0 ppid=1 pid=5291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:41.790000 audit[5291]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc3395300 a2=3 a3=0 items=0 ppid=1 pid=5291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:41.790000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:41.817047 kernel: audit: type=1327 audit(1769573621.790:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:41.826396 systemd-logind[1617]: New session 16 of user core. Jan 28 04:13:41.834568 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 28 04:13:41.840000 audit[5291]: USER_START pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.848319 kernel: audit: type=1105 audit(1769573621.840:788): pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.849000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:41.855302 kernel: audit: type=1103 audit(1769573621.849:789): pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.191296 sshd[5295]: Connection closed by 4.153.228.146 port 55812 Jan 28 04:13:42.192577 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:42.194000 audit[5291]: USER_END pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.194000 audit[5291]: CRED_DISP pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.203750 systemd[1]: sshd@12-10.230.66.102:22-4.153.228.146:55812.service: Deactivated successfully. Jan 28 04:13:42.203946 kernel: audit: type=1106 audit(1769573622.194:790): pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.204032 kernel: audit: type=1104 audit(1769573622.194:791): pid=5291 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.66.102:22-4.153.228.146:55812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:42.211568 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 04:13:42.214648 systemd-logind[1617]: Session 16 logged out. Waiting for processes to exit. Jan 28 04:13:42.219081 systemd-logind[1617]: Removed session 16. 
Jan 28 04:13:42.302470 systemd[1]: Started sshd@13-10.230.66.102:22-4.153.228.146:55816.service - OpenSSH per-connection server daemon (4.153.228.146:55816). Jan 28 04:13:42.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.66.102:22-4.153.228.146:55816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:42.817000 audit[5307]: USER_ACCT pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.820236 sshd[5307]: Accepted publickey for core from 4.153.228.146 port 55816 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:42.818000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.819000 audit[5307]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbb398380 a2=3 a3=0 items=0 ppid=1 pid=5307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:42.819000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:42.821987 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:42.833628 systemd-logind[1617]: New session 17 of user core. Jan 28 04:13:42.838178 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 04:13:42.843000 audit[5307]: USER_START pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:42.846000 audit[5311]: CRED_ACQ pid=5311 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:43.262753 sshd[5311]: Connection closed by 4.153.228.146 port 55816 Jan 28 04:13:43.263752 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:43.266000 audit[5307]: USER_END pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:43.266000 audit[5307]: CRED_DISP pid=5307 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:43.272017 systemd[1]: sshd@13-10.230.66.102:22-4.153.228.146:55816.service: Deactivated successfully. 
Jan 28 04:13:43.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.66.102:22-4.153.228.146:55816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:43.276188 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 04:13:43.278121 systemd-logind[1617]: Session 17 logged out. Waiting for processes to exit. Jan 28 04:13:43.280470 systemd-logind[1617]: Removed session 17. Jan 28 04:13:43.376848 systemd[1]: Started sshd@14-10.230.66.102:22-4.153.228.146:55826.service - OpenSSH per-connection server daemon (4.153.228.146:55826). Jan 28 04:13:43.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.66.102:22-4.153.228.146:55826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:43.903000 audit[5320]: USER_ACCT pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:43.905032 sshd[5320]: Accepted publickey for core from 4.153.228.146 port 55826 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:43.905000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:43.906000 audit[5320]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd75246450 a2=3 a3=0 items=0 ppid=1 pid=5320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:43.906000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:43.909104 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:43.919872 systemd-logind[1617]: New session 18 of user core. Jan 28 04:13:43.928589 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 04:13:43.932000 audit[5320]: USER_START pid=5320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:43.936000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:44.283611 sshd[5324]: Connection closed by 4.153.228.146 port 55826 Jan 28 04:13:44.284856 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:44.286000 audit[5320]: USER_END pid=5320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:44.287000 audit[5320]: CRED_DISP pid=5320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:44.295711 systemd[1]: sshd@14-10.230.66.102:22-4.153.228.146:55826.service: Deactivated successfully. Jan 28 04:13:44.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.66.102:22-4.153.228.146:55826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:44.299168 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 04:13:44.301022 systemd-logind[1617]: Session 18 logged out. Waiting for processes to exit. Jan 28 04:13:44.303229 systemd-logind[1617]: Removed session 18. 
Jan 28 04:13:45.615587 containerd[1648]: time="2026-01-28T04:13:45.614494487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:13:45.925683 containerd[1648]: time="2026-01-28T04:13:45.925585986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:45.927278 containerd[1648]: time="2026-01-28T04:13:45.927189477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:13:45.927278 containerd[1648]: time="2026-01-28T04:13:45.927238155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:45.927852 kubelet[2950]: E0128 04:13:45.927784 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:45.928567 kubelet[2950]: E0128 04:13:45.927876 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:45.928567 kubelet[2950]: E0128 04:13:45.928154 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6p6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:45.929864 kubelet[2950]: E0128 04:13:45.929826 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:46.617314 containerd[1648]: time="2026-01-28T04:13:46.616162934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 04:13:46.928711 containerd[1648]: time="2026-01-28T04:13:46.928491795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:46.929713 containerd[1648]: time="2026-01-28T04:13:46.929663503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 04:13:46.932296 kubelet[2950]: E0128 04:13:46.930448 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:13:46.932296 kubelet[2950]: E0128 04:13:46.930520 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:13:46.932296 kubelet[2950]: E0128 04:13:46.930721 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j65ns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:46.932296 kubelet[2950]: E0128 04:13:46.931901 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:13:46.951470 containerd[1648]: time="2026-01-28T04:13:46.929823520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:47.615025 
containerd[1648]: time="2026-01-28T04:13:47.614126056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 04:13:48.088853 containerd[1648]: time="2026-01-28T04:13:48.088573094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:48.090144 containerd[1648]: time="2026-01-28T04:13:48.090006406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 04:13:48.090144 containerd[1648]: time="2026-01-28T04:13:48.090056728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:48.090428 kubelet[2950]: E0128 04:13:48.090364 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:13:48.091285 kubelet[2950]: E0128 04:13:48.090446 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:13:48.091285 kubelet[2950]: E0128 04:13:48.090946 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6z88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:48.091536 containerd[1648]: time="2026-01-28T04:13:48.091331354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:13:48.093161 kubelet[2950]: E0128 04:13:48.093040 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:13:48.393492 containerd[1648]: time="2026-01-28T04:13:48.392905894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:48.394575 containerd[1648]: time="2026-01-28T04:13:48.394473945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:13:48.394730 containerd[1648]: time="2026-01-28T04:13:48.394515254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:48.395558 kubelet[2950]: E0128 04:13:48.395461 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:48.395714 kubelet[2950]: E0128 04:13:48.395580 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:13:48.396670 kubelet[2950]: E0128 04:13:48.396567 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r55sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:48.398361 kubelet[2950]: E0128 04:13:48.398124 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:13:48.616963 kubelet[2950]: E0128 04:13:48.616602 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:13:48.617669 containerd[1648]: time="2026-01-28T04:13:48.617606342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 04:13:48.937077 containerd[1648]: time="2026-01-28T04:13:48.936754594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:48.939296 containerd[1648]: time="2026-01-28T04:13:48.939215239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 04:13:48.939513 containerd[1648]: time="2026-01-28T04:13:48.939431600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:48.940330 kubelet[2950]: E0128 04:13:48.940093 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:13:48.940330 kubelet[2950]: E0128 04:13:48.940229 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:13:48.943350 kubelet[2950]: E0128 04:13:48.943210 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:48.948847 containerd[1648]: time="2026-01-28T04:13:48.948808926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 04:13:49.258437 containerd[1648]: time="2026-01-28T04:13:49.258146532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:13:49.260316 containerd[1648]: time="2026-01-28T04:13:49.260229562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 04:13:49.260470 containerd[1648]: time="2026-01-28T04:13:49.260269630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 04:13:49.260853 kubelet[2950]: E0128 04:13:49.260714 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:13:49.261307 kubelet[2950]: E0128 04:13:49.260885 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:13:49.261307 kubelet[2950]: E0128 04:13:49.261142 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 04:13:49.262858 kubelet[2950]: E0128 04:13:49.262815 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:13:49.386813 systemd[1]: Started sshd@15-10.230.66.102:22-4.153.228.146:56858.service - OpenSSH per-connection server daemon (4.153.228.146:56858). Jan 28 04:13:49.401445 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 04:13:49.401667 kernel: audit: type=1130 audit(1769573629.386:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.66.102:22-4.153.228.146:56858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:13:49.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.66.102:22-4.153.228.146:56858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:49.928000 audit[5338]: USER_ACCT pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.933245 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:49.940640 kernel: audit: type=1101 audit(1769573629.928:812): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.940751 kernel: audit: type=1103 audit(1769573629.930:813): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.930000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.940939 sshd[5338]: Accepted publickey for core from 4.153.228.146 port 56858 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:49.946175 kernel: audit: type=1006 audit(1769573629.930:814): pid=5338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 04:13:49.930000 audit[5338]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad48adc0 a2=3 a3=0 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:49.930000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:49.954478 kernel: audit: type=1300 audit(1769573629.930:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffad48adc0 a2=3 a3=0 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:49.954557 kernel: audit: type=1327 audit(1769573629.930:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:49.961525 systemd-logind[1617]: New session 19 of user core. Jan 28 04:13:49.966585 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 28 04:13:49.972000 audit[5338]: USER_START pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.977000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.981656 kernel: audit: type=1105 audit(1769573629.972:815): pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:49.981774 kernel: audit: type=1103 audit(1769573629.977:816): pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:50.321231 sshd[5342]: Connection closed by 4.153.228.146 port 56858 Jan 28 04:13:50.322785 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:50.325000 audit[5338]: USER_END pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:50.327000 audit[5338]: CRED_DISP pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:50.334415 kernel: audit: type=1106 audit(1769573630.325:817): pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:50.334520 kernel: audit: type=1104 audit(1769573630.327:818): pid=5338 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:50.340580 systemd[1]: sshd@15-10.230.66.102:22-4.153.228.146:56858.service: Deactivated successfully. Jan 28 04:13:50.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.66.102:22-4.153.228.146:56858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:50.348409 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 04:13:50.350294 systemd-logind[1617]: Session 19 logged out. Waiting for processes to exit. Jan 28 04:13:50.353504 systemd-logind[1617]: Removed session 19. 
Jan 28 04:13:55.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.66.102:22-4.153.228.146:35974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:55.434825 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:13:55.434951 kernel: audit: type=1130 audit(1769573635.422:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.66.102:22-4.153.228.146:35974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:13:55.422209 systemd[1]: Started sshd@16-10.230.66.102:22-4.153.228.146:35974.service - OpenSSH per-connection server daemon (4.153.228.146:35974). Jan 28 04:13:55.940000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.944119 sshd[5385]: Accepted publickey for core from 4.153.228.146 port 35974 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:13:55.945000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.947181 kernel: audit: type=1101 audit(1769573635.940:821): pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.947249 kernel: audit: type=1103 audit(1769573635.945:822): pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.947946 sshd-session[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:13:55.945000 audit[5385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6bd240e0 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:55.955247 kernel: audit: type=1006 audit(1769573635.945:823): pid=5385 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 28 04:13:55.955377 kernel: audit: type=1300 audit(1769573635.945:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6bd240e0 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:13:55.945000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:55.961380 kernel: audit: type=1327 audit(1769573635.945:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:13:55.964009 systemd-logind[1617]: New session 20 of user core. 
Jan 28 04:13:55.973582 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 04:13:55.979000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.983000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.987647 kernel: audit: type=1105 audit(1769573635.979:824): pid=5385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:55.988519 kernel: audit: type=1103 audit(1769573635.983:825): pid=5389 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:56.312056 sshd[5389]: Connection closed by 4.153.228.146 port 35974 Jan 28 04:13:56.313505 sshd-session[5385]: pam_unix(sshd:session): session closed for user core Jan 28 04:13:56.318000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:56.322927 systemd[1]: sshd@16-10.230.66.102:22-4.153.228.146:35974.service: Deactivated successfully. Jan 28 04:13:56.325304 kernel: audit: type=1106 audit(1769573636.318:826): pid=5385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:56.327862 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 04:13:56.318000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:56.332716 systemd-logind[1617]: Session 20 logged out. Waiting for processes to exit. Jan 28 04:13:56.335060 kernel: audit: type=1104 audit(1769573636.318:827): pid=5385 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:13:56.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.66.102:22-4.153.228.146:35974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:13:56.336611 systemd-logind[1617]: Removed session 20. Jan 28 04:13:58.615850 kubelet[2950]: E0128 04:13:58.615621 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:13:59.618106 kubelet[2950]: E0128 04:13:59.618013 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:14:00.616115 kubelet[2950]: E0128 04:14:00.615444 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:14:00.619598 kubelet[2950]: E0128 04:14:00.619153 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:14:01.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.66.102:22-4.153.228.146:35988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:14:01.421229 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:14:01.421440 kernel: audit: type=1130 audit(1769573641.414:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.66.102:22-4.153.228.146:35988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:01.415777 systemd[1]: Started sshd@17-10.230.66.102:22-4.153.228.146:35988.service - OpenSSH per-connection server daemon (4.153.228.146:35988). Jan 28 04:14:01.938000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.941519 sshd[5400]: Accepted publickey for core from 4.153.228.146 port 35988 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:01.944124 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:01.945479 kernel: audit: type=1101 audit(1769573641.938:830): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.945552 kernel: audit: type=1103 audit(1769573641.941:831): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.941000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.950308 kernel: audit: type=1006 audit(1769573641.941:832): pid=5400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 04:14:01.941000 audit[5400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd5724cd0 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:01.956276 kernel: audit: type=1300 audit(1769573641.941:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd5724cd0 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:01.954536 systemd-logind[1617]: New session 21 of user core. Jan 28 04:14:01.941000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:01.961305 kernel: audit: type=1327 audit(1769573641.941:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:01.962682 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 28 04:14:01.969000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.977000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.986095 kernel: audit: type=1105 audit(1769573641.969:833): pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:01.986310 kernel: audit: type=1103 audit(1769573641.977:834): pid=5405 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:02.306723 sshd[5405]: Connection closed by 4.153.228.146 port 35988 Jan 28 04:14:02.308604 sshd-session[5400]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:02.309000 audit[5400]: USER_END pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:02.317389 systemd[1]: sshd@17-10.230.66.102:22-4.153.228.146:35988.service: Deactivated successfully. Jan 28 04:14:02.318688 kernel: audit: type=1106 audit(1769573642.309:835): pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:02.318749 kernel: audit: type=1104 audit(1769573642.310:836): pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:02.310000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:02.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.66.102:22-4.153.228.146:35988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:02.323111 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 04:14:02.326100 systemd-logind[1617]: Session 21 logged out. Waiting for processes to exit. Jan 28 04:14:02.328199 systemd-logind[1617]: Removed session 21. 
Jan 28 04:14:02.619874 kubelet[2950]: E0128 04:14:02.619708 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:14:02.623001 kubelet[2950]: E0128 04:14:02.622966 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:14:07.409192 systemd[1]: Started sshd@18-10.230.66.102:22-4.153.228.146:59694.service - OpenSSH per-connection server daemon (4.153.228.146:59694). Jan 28 04:14:07.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.66.102:22-4.153.228.146:59694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:07.412596 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:14:07.412749 kernel: audit: type=1130 audit(1769573647.408:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.66.102:22-4.153.228.146:59694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:14:07.933000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.937293 sshd[5421]: Accepted publickey for core from 4.153.228.146 port 59694 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:07.939855 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:07.940317 kernel: audit: type=1101 audit(1769573647.933:839): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.935000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.945308 kernel: audit: type=1103 audit(1769573647.935:840): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.949475 kernel: audit: type=1006 audit(1769573647.936:841): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 28 04:14:07.950302 kernel: audit: type=1300 audit(1769573647.936:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0e7ffc80 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:07.936000 audit[5421]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0e7ffc80 a2=3 a3=0 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:07.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:07.956063 kernel: audit: type=1327 audit(1769573647.936:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:07.960645 systemd-logind[1617]: New session 22 of user core. Jan 28 04:14:07.975801 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 04:14:07.981000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.985000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.990435 kernel: audit: type=1105 audit(1769573647.981:842): pid=5421 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:07.990580 kernel: audit: type=1103 audit(1769573647.985:843): pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.332147 sshd[5425]: Connection closed by 4.153.228.146 port 59694 Jan 28 04:14:08.331679 sshd-session[5421]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:08.334000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.342338 kernel: audit: type=1106 audit(1769573648.334:844): pid=5421 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.342222 systemd[1]: sshd@18-10.230.66.102:22-4.153.228.146:59694.service: Deactivated successfully. Jan 28 04:14:08.335000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.347869 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 04:14:08.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.66.102:22-4.153.228.146:59694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:08.348361 kernel: audit: type=1104 audit(1769573648.335:845): pid=5421 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.352630 systemd-logind[1617]: Session 22 logged out. Waiting for processes to exit. Jan 28 04:14:08.354768 systemd-logind[1617]: Removed session 22. 
Jan 28 04:14:08.449631 systemd[1]: Started sshd@19-10.230.66.102:22-4.153.228.146:59698.service - OpenSSH per-connection server daemon (4.153.228.146:59698). Jan 28 04:14:08.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.66.102:22-4.153.228.146:59698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:08.979000 audit[5437]: USER_ACCT pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.981205 sshd[5437]: Accepted publickey for core from 4.153.228.146 port 59698 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:08.981000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:08.981000 audit[5437]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7e77d0d0 a2=3 a3=0 items=0 ppid=1 pid=5437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:08.981000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:08.984325 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:08.993686 systemd-logind[1617]: New session 23 of user core. Jan 28 04:14:09.003733 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 04:14:09.008000 audit[5437]: USER_START pid=5437 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:09.012000 audit[5441]: CRED_ACQ pid=5441 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:09.847536 sshd[5441]: Connection closed by 4.153.228.146 port 59698 Jan 28 04:14:09.851296 sshd-session[5437]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:09.862000 audit[5437]: USER_END pid=5437 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:09.863000 audit[5437]: CRED_DISP pid=5437 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:09.868136 systemd[1]: sshd@19-10.230.66.102:22-4.153.228.146:59698.service: Deactivated successfully. 
Jan 28 04:14:09.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.66.102:22-4.153.228.146:59698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:09.873622 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 04:14:09.877601 systemd-logind[1617]: Session 23 logged out. Waiting for processes to exit. Jan 28 04:14:09.879413 systemd-logind[1617]: Removed session 23. Jan 28 04:14:09.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.66.102:22-4.153.228.146:59708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:09.952200 systemd[1]: Started sshd@20-10.230.66.102:22-4.153.228.146:59708.service - OpenSSH per-connection server daemon (4.153.228.146:59708). Jan 28 04:14:10.526000 audit[5451]: USER_ACCT pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:10.528784 sshd[5451]: Accepted publickey for core from 4.153.228.146 port 59708 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:10.528000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:10.528000 audit[5451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd40d9a100 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:10.528000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:10.532144 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:10.541104 systemd-logind[1617]: New session 24 of user core. Jan 28 04:14:10.551605 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 28 04:14:10.558000 audit[5451]: USER_START pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:10.561000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:10.617897 kubelet[2950]: E0128 04:14:10.617795 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:14:11.605000 audit[5465]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:14:11.605000 audit[5465]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff0b2106f0 a2=0 a3=7fff0b2106dc items=0 ppid=3100 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:11.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:14:11.611000 audit[5465]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5465 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:14:11.611000 audit[5465]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff0b2106f0 a2=0 a3=0 items=0 ppid=3100 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:11.615693 kubelet[2950]: E0128 04:14:11.615581 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:14:11.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
28 04:14:11.654000 audit[5467]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:14:11.654000 audit[5467]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd120faed0 a2=0 a3=7ffd120faebc items=0 ppid=3100 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:11.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:14:11.662000 audit[5467]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5467 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:14:11.662000 audit[5467]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd120faed0 a2=0 a3=0 items=0 ppid=3100 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:11.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:14:11.697485 sshd[5455]: Connection closed by 4.153.228.146 port 59708 Jan 28 04:14:11.697975 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:11.701000 audit[5451]: USER_END pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:11.701000 audit[5451]: CRED_DISP pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:11.715369 systemd[1]: sshd@20-10.230.66.102:22-4.153.228.146:59708.service: Deactivated successfully. Jan 28 04:14:11.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.66.102:22-4.153.228.146:59708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:11.716372 systemd-logind[1617]: Session 24 logged out. Waiting for processes to exit. Jan 28 04:14:11.721005 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 04:14:11.724882 systemd-logind[1617]: Removed session 24. Jan 28 04:14:11.797849 systemd[1]: Started sshd@21-10.230.66.102:22-4.153.228.146:59720.service - OpenSSH per-connection server daemon (4.153.228.146:59720). Jan 28 04:14:11.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.66.102:22-4.153.228.146:59720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:14:12.335000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.338340 sshd[5472]: Accepted publickey for core from 4.153.228.146 port 59720 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:12.337000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.338000 audit[5472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc6a0b410 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:12.338000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:12.340729 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:12.348361 systemd-logind[1617]: New session 25 of user core. Jan 28 04:14:12.358936 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 04:14:12.362000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.366000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.954127 sshd[5476]: Connection closed by 4.153.228.146 port 59720 Jan 28 04:14:12.956509 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:12.977318 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 28 04:14:12.977759 kernel: audit: type=1106 audit(1769573652.961:875): pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.961000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.983012 systemd[1]: sshd@21-10.230.66.102:22-4.153.228.146:59720.service: Deactivated successfully. Jan 28 04:14:12.986968 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 04:14:12.991002 systemd-logind[1617]: Session 25 logged out. Waiting for processes to exit. 
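Note: the NETFILTER_CFG/SYSCALL pairs recorded above (comm="iptables-restor", ppid=3100, exe=/usr/sbin/xtables-nft-multi) are nftables rule reloads. Decoding their PROCTITLE with the decode_proctitle sketch shown earlier gives an invocation consistent with kube-proxy's periodic iptables sync (the same parent pid appears on every reload); this attribution is an inference from the records, not stated in the log itself:

    print(decode_proctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters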
Jan 28 04:14:12.961000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.997289 kernel: audit: type=1104 audit(1769573652.961:876): pid=5472 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:12.998384 systemd-logind[1617]: Removed session 25. Jan 28 04:14:12.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.66.102:22-4.153.228.146:59720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:13.004325 kernel: audit: type=1131 audit(1769573652.982:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.66.102:22-4.153.228.146:59720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:13.065644 systemd[1]: Started sshd@22-10.230.66.102:22-4.153.228.146:59734.service - OpenSSH per-connection server daemon (4.153.228.146:59734). Jan 28 04:14:13.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.66.102:22-4.153.228.146:59734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:13.071417 kernel: audit: type=1130 audit(1769573653.064:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.66.102:22-4.153.228.146:59734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:14:13.614890 kubelet[2950]: E0128 04:14:13.614374 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:14:13.650432 sshd[5486]: Accepted publickey for core from 4.153.228.146 port 59734 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:13.649000 audit[5486]: USER_ACCT pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:13.653981 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:13.651000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:13.657293 kernel: audit: type=1101 audit(1769573653.649:879): pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:13.657378 kernel: audit: type=1103 audit(1769573653.651:880): pid=5486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:13.664294 kernel: audit: type=1006 audit(1769573653.651:881): pid=5486 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 28 04:14:13.664373 kernel: audit: type=1300 audit(1769573653.651:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff019492e0 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:13.651000 audit[5486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff019492e0 a2=3 a3=0 items=0 ppid=1 pid=5486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:13.667036 systemd-logind[1617]: New session 26 of user core. Jan 28 04:14:13.670938 kernel: audit: type=1327 audit(1769573653.651:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:13.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:13.673520 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 28 04:14:13.679000 audit[5486]: USER_START pid=5486 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:13.682000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:13.687327 kernel: audit: type=1105 audit(1769573653.679:882): pid=5486 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:14.050365 sshd[5490]: Connection closed by 4.153.228.146 port 59734 Jan 28 04:14:14.051259 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:14.052000 audit[5486]: USER_END pid=5486 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:14.052000 audit[5486]: CRED_DISP pid=5486 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:14.057898 systemd[1]: sshd@22-10.230.66.102:22-4.153.228.146:59734.service: Deactivated successfully. Jan 28 04:14:14.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.66.102:22-4.153.228.146:59734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:14.060901 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 04:14:14.063813 systemd-logind[1617]: Session 26 logged out. Waiting for processes to exit. Jan 28 04:14:14.065683 systemd-logind[1617]: Removed session 26. 
Jan 28 04:14:14.616119 kubelet[2950]: E0128 04:14:14.615948 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:14:15.614314 kubelet[2950]: E0128 04:14:15.614168 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:14:15.615561 kubelet[2950]: E0128 04:14:15.614383 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:14:19.155465 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 28 04:14:19.155662 kernel: audit: type=1130 audit(1769573659.142:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.66.102:22-4.153.228.146:59544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:19.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.66.102:22-4.153.228.146:59544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:19.143644 systemd[1]: Started sshd@23-10.230.66.102:22-4.153.228.146:59544.service - OpenSSH per-connection server daemon (4.153.228.146:59544). 
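Note: the recurring kubelet "Error syncing pod" entries above all reference ghcr.io/flatcar/calico/* images at tag v3.30.4 that the registry reports as not found, so the affected pods stay in ImagePullBackOff. A small, illustrative script to tally which image references are failing from a saved copy of this journal (read on stdin; a sketch, not something present on the node):

    import re, sys
    from collections import Counter

    counts = Counter()
    for line in sys.stdin:
        if "ImagePullBackOff" in line or "ErrImagePull" in line:
            # count each failing image reference once per log line
            counts.update(set(re.findall(r"ghcr\.io/[\w./-]+:[\w.-]+", line)))
    for image, n in counts.most_common():
        print(n, image)

On the node itself, a manual pull of one of these references (for example with crictl pull ghcr.io/flatcar/calico/whisker:v3.30.4) should reproduce the same not-found failure that the containerd records further below show.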
Jan 28 04:14:19.658000 audit[5508]: USER_ACCT pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.662996 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:19.660000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.665224 sshd[5508]: Accepted publickey for core from 4.153.228.146 port 59544 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:19.666595 kernel: audit: type=1101 audit(1769573659.658:888): pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.666689 kernel: audit: type=1103 audit(1769573659.660:889): pid=5508 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.660000 audit[5508]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9b033420 a2=3 a3=0 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:19.675675 kernel: audit: type=1006 audit(1769573659.660:890): pid=5508 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 28 04:14:19.675772 kernel: audit: type=1300 audit(1769573659.660:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9b033420 a2=3 a3=0 items=0 ppid=1 pid=5508 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:19.660000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:19.680065 kernel: audit: type=1327 audit(1769573659.660:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:19.684357 systemd-logind[1617]: New session 27 of user core. Jan 28 04:14:19.695683 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 28 04:14:19.701000 audit[5508]: USER_START pid=5508 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.708420 kernel: audit: type=1105 audit(1769573659.701:891): pid=5508 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.704000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:19.713379 kernel: audit: type=1103 audit(1769573659.704:892): pid=5512 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:20.021360 sshd[5512]: Connection closed by 4.153.228.146 port 59544 Jan 28 04:14:20.023465 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:20.037370 kernel: audit: type=1106 audit(1769573660.025:893): pid=5508 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:20.025000 audit[5508]: USER_END pid=5508 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:20.038425 systemd[1]: sshd@23-10.230.66.102:22-4.153.228.146:59544.service: Deactivated successfully. Jan 28 04:14:20.044004 kernel: audit: type=1104 audit(1769573660.025:894): pid=5508 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:20.025000 audit[5508]: CRED_DISP pid=5508 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:20.044073 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 04:14:20.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.66.102:22-4.153.228.146:59544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:20.047607 systemd-logind[1617]: Session 27 logged out. Waiting for processes to exit. Jan 28 04:14:20.049380 systemd-logind[1617]: Removed session 27. 
Jan 28 04:14:23.298000 audit[5523]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:14:23.298000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd0ecba680 a2=0 a3=7ffd0ecba66c items=0 ppid=3100 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:23.298000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:14:23.305000 audit[5523]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 04:14:23.305000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd0ecba680 a2=0 a3=7ffd0ecba66c items=0 ppid=3100 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:23.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 04:14:24.628285 kubelet[2950]: E0128 04:14:24.625194 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:14:24.640956 containerd[1648]: time="2026-01-28T04:14:24.618911497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 04:14:24.953341 containerd[1648]: time="2026-01-28T04:14:24.952845565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:24.980923 containerd[1648]: time="2026-01-28T04:14:24.980819062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:24.981365 containerd[1648]: time="2026-01-28T04:14:24.981161465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 04:14:24.982025 kubelet[2950]: E0128 04:14:24.981850 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:14:24.982358 kubelet[2950]: E0128 04:14:24.982145 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 04:14:24.982978 kubelet[2950]: E0128 
04:14:24.982768 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3e1e1e2ca0574df88a00a87ecf91f97d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:24.985399 containerd[1648]: time="2026-01-28T04:14:24.985344813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 04:14:25.124941 systemd[1]: Started sshd@24-10.230.66.102:22-4.153.228.146:47646.service - OpenSSH per-connection server daemon (4.153.228.146:47646). Jan 28 04:14:25.138771 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 04:14:25.138991 kernel: audit: type=1130 audit(1769573665.124:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.66.102:22-4.153.228.146:47646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:25.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.66.102:22-4.153.228.146:47646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 04:14:25.303018 containerd[1648]: time="2026-01-28T04:14:25.302144010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:25.304851 containerd[1648]: time="2026-01-28T04:14:25.304630824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 04:14:25.304851 containerd[1648]: time="2026-01-28T04:14:25.304794536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:25.307246 kubelet[2950]: E0128 04:14:25.305223 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:14:25.307246 kubelet[2950]: E0128 04:14:25.306387 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 04:14:25.307246 kubelet[2950]: E0128 04:14:25.306624 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6stnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-557cdf66d6-zbq82_calico-system(597d28d3-837d-4a2a-8aed-8b9a166157ec): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:25.307857 kubelet[2950]: E0128 04:14:25.307786 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:14:25.614415 kubelet[2950]: E0128 04:14:25.614049 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:14:25.702000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.725011 kernel: audit: type=1101 audit(1769573665.702:899): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.725130 kernel: audit: type=1103 audit(1769573665.714:900): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.714000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.718082 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:25.730449 sshd[5549]: Accepted publickey for core from 4.153.228.146 port 47646 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:25.733541 kernel: audit: type=1006 audit(1769573665.715:901): pid=5549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 28 04:14:25.733626 kernel: audit: type=1300 audit(1769573665.715:901): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe009c0bd0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:25.715000 audit[5549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe009c0bd0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:25.715000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:25.747285 kernel: audit: type=1327 audit(1769573665.715:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:25.753075 systemd-logind[1617]: New session 28 of user core. Jan 28 04:14:25.758665 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 04:14:25.765000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.774286 kernel: audit: type=1105 audit(1769573665.765:902): pid=5549 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.774000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:25.782333 kernel: audit: type=1103 audit(1769573665.774:903): pid=5553 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:26.250110 sshd[5553]: Connection closed by 4.153.228.146 port 47646 Jan 28 04:14:26.250054 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:26.251000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:26.268199 kernel: audit: type=1106 audit(1769573666.251:904): pid=5549 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:26.268568 kernel: audit: type=1104 audit(1769573666.251:905): pid=5549 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:26.251000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=28 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:26.270316 systemd[1]: sshd@24-10.230.66.102:22-4.153.228.146:47646.service: Deactivated successfully. Jan 28 04:14:26.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.66.102:22-4.153.228.146:47646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:26.277212 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 04:14:26.279155 systemd-logind[1617]: Session 28 logged out. Waiting for processes to exit. Jan 28 04:14:26.284470 systemd-logind[1617]: Removed session 28. Jan 28 04:14:26.615815 kubelet[2950]: E0128 04:14:26.615203 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:14:26.618035 kubelet[2950]: E0128 04:14:26.617815 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:14:30.616331 containerd[1648]: time="2026-01-28T04:14:30.615046565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 04:14:30.995606 containerd[1648]: time="2026-01-28T04:14:30.995465158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:30.997195 containerd[1648]: time="2026-01-28T04:14:30.997156054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 04:14:30.997541 containerd[1648]: time="2026-01-28T04:14:30.997324200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:30.999074 kubelet[2950]: E0128 04:14:30.997902 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:14:30.999074 kubelet[2950]: E0128 04:14:30.997984 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 04:14:30.999741 kubelet[2950]: E0128 04:14:30.999641 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6z88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xf9j7_calico-system(d08f7533-ee5b-4a11-b707-6aef7c12a55d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:31.001064 kubelet[2950]: E0128 04:14:31.001020 2950 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d" Jan 28 04:14:31.364483 systemd[1]: Started sshd@25-10.230.66.102:22-4.153.228.146:47662.service - OpenSSH per-connection server daemon (4.153.228.146:47662). Jan 28 04:14:31.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.66.102:22-4.153.228.146:47662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:31.375310 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:14:31.375710 kernel: audit: type=1130 audit(1769573671.364:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.66.102:22-4.153.228.146:47662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:31.963423 kernel: audit: type=1101 audit(1769573671.955:908): pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:31.955000 audit[5568]: USER_ACCT pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:31.964331 sshd[5568]: Accepted publickey for core from 4.153.228.146 port 47662 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:31.975297 kernel: audit: type=1103 audit(1769573671.969:909): pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:31.969000 audit[5568]: CRED_ACQ pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:31.975177 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:31.982382 kernel: audit: type=1006 audit(1769573671.970:910): pid=5568 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 28 04:14:31.970000 audit[5568]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3caeed90 a2=3 a3=0 items=0 ppid=1 pid=5568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:31.989076 kernel: audit: type=1300 audit(1769573671.970:910): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3caeed90 a2=3 a3=0 items=0 ppid=1 pid=5568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:31.998291 kernel: audit: type=1327 audit(1769573671.970:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:31.970000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:32.005286 systemd-logind[1617]: New session 29 of user core. Jan 28 04:14:32.012037 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 04:14:32.021000 audit[5568]: USER_START pid=5568 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.030783 kernel: audit: type=1105 audit(1769573672.021:911): pid=5568 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.030000 audit[5573]: CRED_ACQ pid=5573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.038331 kernel: audit: type=1103 audit(1769573672.030:912): pid=5573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.407600 sshd[5573]: Connection closed by 4.153.228.146 port 47662 Jan 28 04:14:32.407925 sshd-session[5568]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:32.412000 audit[5568]: USER_END pid=5568 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.419291 kernel: audit: type=1106 audit(1769573672.412:913): pid=5568 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.420048 systemd[1]: sshd@25-10.230.66.102:22-4.153.228.146:47662.service: Deactivated successfully. 
Jan 28 04:14:32.413000 audit[5568]: CRED_DISP pid=5568 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.432293 kernel: audit: type=1104 audit(1769573672.413:914): pid=5568 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:32.432851 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 04:14:32.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.66.102:22-4.153.228.146:47662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:32.439393 systemd-logind[1617]: Session 29 logged out. Waiting for processes to exit. Jan 28 04:14:32.445190 systemd-logind[1617]: Removed session 29. Jan 28 04:14:35.615321 containerd[1648]: time="2026-01-28T04:14:35.614039184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:14:35.937547 containerd[1648]: time="2026-01-28T04:14:35.937445018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:35.940377 containerd[1648]: time="2026-01-28T04:14:35.940323428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:14:35.940482 containerd[1648]: time="2026-01-28T04:14:35.940453928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:35.940943 kubelet[2950]: E0128 04:14:35.940768 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:14:35.940943 kubelet[2950]: E0128 04:14:35.940853 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:14:35.944323 kubelet[2950]: E0128 04:14:35.943456 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6p6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-8mhgz_calico-apiserver(ab23ab24-7e12-4864-a3ee-8b4882a74a22): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:35.945460 kubelet[2950]: E0128 04:14:35.945425 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-8mhgz" podUID="ab23ab24-7e12-4864-a3ee-8b4882a74a22" Jan 28 04:14:36.616852 containerd[1648]: time="2026-01-28T04:14:36.615645703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 04:14:36.951370 containerd[1648]: time="2026-01-28T04:14:36.951296373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:36.958690 containerd[1648]: time="2026-01-28T04:14:36.958606899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 04:14:36.958867 containerd[1648]: time="2026-01-28T04:14:36.958643373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:36.959340 
kubelet[2950]: E0128 04:14:36.959026 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:14:36.959340 kubelet[2950]: E0128 04:14:36.959116 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 04:14:36.960776 kubelet[2950]: E0128 04:14:36.959371 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r55sg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67db4dc4b5-cstcv_calico-apiserver(a75b9d4b-fd28-4515-89fe-b1c194b4eb55): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:36.961451 kubelet[2950]: E0128 04:14:36.961326 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-67db4dc4b5-cstcv" podUID="a75b9d4b-fd28-4515-89fe-b1c194b4eb55" Jan 28 04:14:37.507406 systemd[1]: Started sshd@26-10.230.66.102:22-4.153.228.146:60532.service - OpenSSH per-connection server daemon (4.153.228.146:60532). Jan 28 04:14:37.519335 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 04:14:37.519423 kernel: audit: type=1130 audit(1769573677.507:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.66.102:22-4.153.228.146:60532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:37.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.66.102:22-4.153.228.146:60532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:37.620799 containerd[1648]: time="2026-01-28T04:14:37.620647911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 04:14:37.628361 kubelet[2950]: E0128 04:14:37.624658 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-557cdf66d6-zbq82" podUID="597d28d3-837d-4a2a-8aed-8b9a166157ec" Jan 28 04:14:37.974819 containerd[1648]: time="2026-01-28T04:14:37.974658375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:37.976982 containerd[1648]: time="2026-01-28T04:14:37.976861822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 04:14:37.977527 containerd[1648]: time="2026-01-28T04:14:37.977386522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:37.978604 kubelet[2950]: E0128 04:14:37.977856 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:14:37.978604 kubelet[2950]: E0128 04:14:37.977928 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 04:14:37.978604 kubelet[2950]: E0128 04:14:37.978137 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:37.982113 containerd[1648]: time="2026-01-28T04:14:37.981674158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 04:14:38.094000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.102063 sshd[5607]: Accepted publickey for core from 4.153.228.146 port 60532 ssh2: RSA SHA256:a3RqpJagMyrjBVXmQom8xIMk+bUepmcJ4zvQ7udDZ2o Jan 28 04:14:38.107402 kernel: audit: type=1101 audit(1769573678.094:917): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.109000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.113188 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 04:14:38.117314 kernel: audit: 
type=1103 audit(1769573678.109:918): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.121350 kernel: audit: type=1006 audit(1769573678.109:919): pid=5607 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 28 04:14:38.109000 audit[5607]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeede568e0 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:38.130413 kernel: audit: type=1300 audit(1769573678.109:919): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeede568e0 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 04:14:38.109000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:38.133279 kernel: audit: type=1327 audit(1769573678.109:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 04:14:38.140573 systemd-logind[1617]: New session 30 of user core. Jan 28 04:14:38.146569 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 04:14:38.161304 kernel: audit: type=1105 audit(1769573678.154:920): pid=5607 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.154000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.161000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.166406 kernel: audit: type=1103 audit(1769573678.161:921): pid=5611 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.298286 containerd[1648]: time="2026-01-28T04:14:38.297661255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:38.305964 containerd[1648]: time="2026-01-28T04:14:38.305576532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 04:14:38.305964 containerd[1648]: time="2026-01-28T04:14:38.305749755Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:38.306119 kubelet[2950]: E0128 04:14:38.306022 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:14:38.306234 kubelet[2950]: E0128 04:14:38.306109 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 04:14:38.307339 kubelet[2950]: E0128 04:14:38.306399 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqd7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnlhb_calico-system(c2a88baa-8755-4a0f-b81e-f2ef466fcd2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:38.308293 kubelet[2950]: E0128 04:14:38.307961 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnlhb" podUID="c2a88baa-8755-4a0f-b81e-f2ef466fcd2d" Jan 28 04:14:38.749820 sshd[5611]: Connection closed by 4.153.228.146 port 60532 Jan 28 04:14:38.750389 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Jan 28 04:14:38.753000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.767689 kernel: audit: type=1106 audit(1769573678.753:922): pid=5607 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.771863 systemd[1]: sshd@26-10.230.66.102:22-4.153.228.146:60532.service: Deactivated successfully. Jan 28 04:14:38.777206 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 04:14:38.753000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.786196 kernel: audit: type=1104 audit(1769573678.753:923): pid=5607 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 28 04:14:38.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.66.102:22-4.153.228.146:60532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 04:14:38.788187 systemd-logind[1617]: Session 30 logged out. Waiting for processes to exit. Jan 28 04:14:38.791581 systemd-logind[1617]: Removed session 30. 
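Every ErrImagePull/ImagePullBackOff record in this stretch stems from containerd receiving 404 Not Found from ghcr.io for the flatcar/calico images. A short sketch (not part of the captured journal) for tallying which image references fail in a saved journal dump; the regex assumes only the containerd error format visible in the lines above, and the journal.txt filename is a placeholder:

```python
# Minimal sketch: count PullImage failures per image reference in a journal
# dump. Matches the containerd error format seen above:
#   level=error msg="PullImage \"<image>\" failed"
import re
from collections import Counter

PULL_FAILED = re.compile(r'level=error msg="PullImage \\"([^"\\]+)\\" failed"')

def failed_pulls(journal_text: str) -> Counter:
    """Return a Counter mapping image reference -> number of failed pulls."""
    return Counter(PULL_FAILED.findall(journal_text))

# Hypothetical usage against an exported journal:
#   failed_pulls(open("journal.txt").read()).most_common()
# would list the ghcr.io/flatcar/calico/* references with their failure counts.
```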
Jan 28 04:14:40.619073 containerd[1648]: time="2026-01-28T04:14:40.617005245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 04:14:40.927165 containerd[1648]: time="2026-01-28T04:14:40.927105594Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 04:14:40.933326 containerd[1648]: time="2026-01-28T04:14:40.933289347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 04:14:40.933625 containerd[1648]: time="2026-01-28T04:14:40.933315129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 04:14:40.933859 kubelet[2950]: E0128 04:14:40.933794 2950 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:14:40.934474 kubelet[2950]: E0128 04:14:40.933898 2950 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 04:14:40.935125 kubelet[2950]: E0128 04:14:40.934813 2950 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j65ns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5877564c64-ssm6r_calico-system(53f99505-aca3-4278-8799-01f0eba5681f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 04:14:40.936569 kubelet[2950]: E0128 04:14:40.936487 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5877564c64-ssm6r" podUID="53f99505-aca3-4278-8799-01f0eba5681f" Jan 28 04:14:41.613668 kubelet[2950]: E0128 04:14:41.613423 2950 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xf9j7" podUID="d08f7533-ee5b-4a11-b707-6aef7c12a55d"