Jan 27 13:01:03.440918 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 10:13:49 -00 2026 Jan 27 13:01:03.440976 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5b839912e96169b2be69ecc38c22dede1b19843035b80450c55f71e4c748b699 Jan 27 13:01:03.440991 kernel: BIOS-provided physical RAM map: Jan 27 13:01:03.441002 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 27 13:01:03.441751 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 27 13:01:03.441771 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 27 13:01:03.441791 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 27 13:01:03.441814 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 27 13:01:03.441826 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 27 13:01:03.441837 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 27 13:01:03.441848 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 27 13:01:03.441859 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 27 13:01:03.441870 kernel: NX (Execute Disable) protection: active Jan 27 13:01:03.441903 kernel: APIC: Static calls initialized Jan 27 13:01:03.441916 kernel: SMBIOS 2.8 present. Jan 27 13:01:03.441929 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 27 13:01:03.441948 kernel: DMI: Memory slots populated: 1/1 Jan 27 13:01:03.441974 kernel: Hypervisor detected: KVM Jan 27 13:01:03.441986 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 27 13:01:03.441998 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 27 13:01:03.442010 kernel: kvm-clock: using sched offset of 5429787178 cycles Jan 27 13:01:03.442022 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 27 13:01:03.442035 kernel: tsc: Detected 2799.998 MHz processor Jan 27 13:01:03.442047 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 27 13:01:03.442060 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 27 13:01:03.442085 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 27 13:01:03.442098 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 27 13:01:03.442110 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 27 13:01:03.442122 kernel: Using GB pages for direct mapping Jan 27 13:01:03.442134 kernel: ACPI: Early table checksum verification disabled Jan 27 13:01:03.442146 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 27 13:01:03.442158 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442170 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442197 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442221 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 27 13:01:03.442232 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442244 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442256 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442267 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 27 13:01:03.442292 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 27 13:01:03.442324 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 27 13:01:03.442336 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 27 13:01:03.442347 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 27 13:01:03.442372 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 27 13:01:03.442407 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 27 13:01:03.442420 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 27 13:01:03.442432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 27 13:01:03.442444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 27 13:01:03.442457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 27 13:01:03.442469 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 27 13:01:03.442482 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 27 13:01:03.442508 kernel: Zone ranges: Jan 27 13:01:03.442522 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 27 13:01:03.442535 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 27 13:01:03.442547 kernel: Normal empty Jan 27 13:01:03.442559 kernel: Device empty Jan 27 13:01:03.442572 kernel: Movable zone start for each node Jan 27 13:01:03.442584 kernel: Early memory node ranges Jan 27 13:01:03.442597 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 27 13:01:03.442623 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 27 13:01:03.442643 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 27 13:01:03.442656 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 27 13:01:03.442674 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 27 13:01:03.442688 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 27 13:01:03.442716 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 27 13:01:03.442736 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 27 13:01:03.442763 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 27 13:01:03.442776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 27 13:01:03.442789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 27 13:01:03.442802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 27 13:01:03.442814 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 27 13:01:03.442827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 27 13:01:03.442839 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 27 13:01:03.442864 kernel: TSC deadline timer available Jan 27 13:01:03.442878 kernel: CPU topo: Max. logical packages: 16 Jan 27 13:01:03.442890 kernel: CPU topo: Max. logical dies: 16 Jan 27 13:01:03.442903 kernel: CPU topo: Max. dies per package: 1 Jan 27 13:01:03.442915 kernel: CPU topo: Max. 
threads per core: 1 Jan 27 13:01:03.442928 kernel: CPU topo: Num. cores per package: 1 Jan 27 13:01:03.442940 kernel: CPU topo: Num. threads per package: 1 Jan 27 13:01:03.442952 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 27 13:01:03.442978 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 27 13:01:03.442991 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 27 13:01:03.443003 kernel: Booting paravirtualized kernel on KVM Jan 27 13:01:03.443021 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 27 13:01:03.443034 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 27 13:01:03.443046 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 27 13:01:03.443059 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 27 13:01:03.443084 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 27 13:01:03.443097 kernel: kvm-guest: PV spinlocks enabled Jan 27 13:01:03.443109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 27 13:01:03.443123 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5b839912e96169b2be69ecc38c22dede1b19843035b80450c55f71e4c748b699 Jan 27 13:01:03.443137 kernel: random: crng init done Jan 27 13:01:03.443149 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 27 13:01:03.443162 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 27 13:01:03.443190 kernel: Fallback order for Node 0: 0 Jan 27 13:01:03.443203 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 27 13:01:03.443228 kernel: Policy zone: DMA32 Jan 27 13:01:03.443240 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 27 13:01:03.443252 kernel: software IO TLB: area num 16. Jan 27 13:01:03.443264 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 27 13:01:03.443276 kernel: Kernel/User page tables isolation: enabled Jan 27 13:01:03.443302 kernel: ftrace: allocating 40128 entries in 157 pages Jan 27 13:01:03.443314 kernel: ftrace: allocated 157 pages with 5 groups Jan 27 13:01:03.443326 kernel: Dynamic Preempt: voluntary Jan 27 13:01:03.443338 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 27 13:01:03.443356 kernel: rcu: RCU event tracing is enabled. Jan 27 13:01:03.443392 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 27 13:01:03.443405 kernel: Trampoline variant of Tasks RCU enabled. Jan 27 13:01:03.443436 kernel: Rude variant of Tasks RCU enabled. Jan 27 13:01:03.443451 kernel: Tracing variant of Tasks RCU enabled. Jan 27 13:01:03.443464 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 27 13:01:03.443476 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 27 13:01:03.443489 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 27 13:01:03.443502 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 27 13:01:03.443514 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 27 13:01:03.443541 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 27 13:01:03.443554 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 27 13:01:03.443595 kernel: Console: colour VGA+ 80x25 Jan 27 13:01:03.443620 kernel: printk: legacy console [tty0] enabled Jan 27 13:01:03.443634 kernel: printk: legacy console [ttyS0] enabled Jan 27 13:01:03.443653 kernel: ACPI: Core revision 20240827 Jan 27 13:01:03.443680 kernel: APIC: Switch to symmetric I/O mode setup Jan 27 13:01:03.443692 kernel: x2apic enabled Jan 27 13:01:03.443807 kernel: APIC: Switched APIC routing to: physical x2apic Jan 27 13:01:03.443843 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 27 13:01:03.443858 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 27 13:01:03.443871 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 27 13:01:03.443884 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 27 13:01:03.443918 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 27 13:01:03.443931 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 27 13:01:03.443944 kernel: Spectre V2 : Mitigation: Retpolines Jan 27 13:01:03.443956 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 27 13:01:03.443969 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 27 13:01:03.443982 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 27 13:01:03.443994 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 27 13:01:03.444007 kernel: MDS: Mitigation: Clear CPU buffers Jan 27 13:01:03.444020 kernel: MMIO Stale Data: Unknown: No mitigations Jan 27 13:01:03.444032 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 27 13:01:03.444045 kernel: active return thunk: its_return_thunk Jan 27 13:01:03.444072 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 27 13:01:03.444085 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 27 13:01:03.444098 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 27 13:01:03.444110 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 27 13:01:03.444123 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 27 13:01:03.444136 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 27 13:01:03.444148 kernel: Freeing SMP alternatives memory: 32K Jan 27 13:01:03.444161 kernel: pid_max: default: 32768 minimum: 301 Jan 27 13:01:03.444174 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 27 13:01:03.444186 kernel: landlock: Up and running. Jan 27 13:01:03.444213 kernel: SELinux: Initializing. Jan 27 13:01:03.444226 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 27 13:01:03.444239 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 27 13:01:03.444252 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 27 13:01:03.444265 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 27 13:01:03.444278 kernel: signal: max sigframe size: 1776 Jan 27 13:01:03.444298 kernel: rcu: Hierarchical SRCU implementation. Jan 27 13:01:03.444312 kernel: rcu: Max phase no-delay instances is 400. Jan 27 13:01:03.444338 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 27 13:01:03.444352 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 27 13:01:03.444366 kernel: smp: Bringing up secondary CPUs ... Jan 27 13:01:03.444387 kernel: smpboot: x86: Booting SMP configuration: Jan 27 13:01:03.444401 kernel: .... node #0, CPUs: #1 Jan 27 13:01:03.444414 kernel: smp: Brought up 1 node, 2 CPUs Jan 27 13:01:03.444428 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 27 13:01:03.444442 kernel: Memory: 1912056K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15532K init, 2504K bss, 178544K reserved, 0K cma-reserved) Jan 27 13:01:03.444470 kernel: devtmpfs: initialized Jan 27 13:01:03.444484 kernel: x86/mm: Memory block size: 128MB Jan 27 13:01:03.444497 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 27 13:01:03.444510 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 27 13:01:03.444523 kernel: pinctrl core: initialized pinctrl subsystem Jan 27 13:01:03.444536 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 27 13:01:03.444550 kernel: audit: initializing netlink subsys (disabled) Jan 27 13:01:03.444577 kernel: audit: type=2000 audit(1769518859.650:1): state=initialized audit_enabled=0 res=1 Jan 27 13:01:03.444590 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 27 13:01:03.444603 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 27 13:01:03.444616 kernel: cpuidle: using governor menu Jan 27 13:01:03.444629 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 27 13:01:03.444642 kernel: dca service started, version 1.12.1 Jan 27 13:01:03.444662 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 27 13:01:03.444704 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 27 13:01:03.444719 kernel: PCI: Using configuration type 1 for base access Jan 27 13:01:03.444732 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 27 13:01:03.444746 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 27 13:01:03.444759 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 27 13:01:03.444772 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 27 13:01:03.444785 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 27 13:01:03.444814 kernel: ACPI: Added _OSI(Module Device) Jan 27 13:01:03.444828 kernel: ACPI: Added _OSI(Processor Device) Jan 27 13:01:03.444841 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 27 13:01:03.444854 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 27 13:01:03.444867 kernel: ACPI: Interpreter enabled Jan 27 13:01:03.444880 kernel: ACPI: PM: (supports S0 S5) Jan 27 13:01:03.444893 kernel: ACPI: Using IOAPIC for interrupt routing Jan 27 13:01:03.444920 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 27 13:01:03.444934 kernel: PCI: Using E820 reservations for host bridge windows Jan 27 13:01:03.444947 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 27 13:01:03.444960 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 27 13:01:03.445263 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 27 13:01:03.445505 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 27 13:01:03.445767 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 27 13:01:03.445788 kernel: PCI host bridge to bus 0000:00 Jan 27 13:01:03.446026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 27 13:01:03.446226 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 27 13:01:03.446437 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 27 13:01:03.446636 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 27 13:01:03.446880 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 27 13:01:03.447078 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 27 13:01:03.447286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 27 13:01:03.447552 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 27 13:01:03.447860 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 27 13:01:03.448102 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 27 13:01:03.448317 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 27 13:01:03.448554 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 27 13:01:03.448790 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 27 13:01:03.449031 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.449245 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 27 13:01:03.449495 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 13:01:03.450454 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 27 13:01:03.450684 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 27 13:01:03.450937 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.451153 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 27 13:01:03.451365 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 
13:01:03.451612 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 27 13:01:03.451847 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 27 13:01:03.452074 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.452286 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 27 13:01:03.452514 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 13:01:03.452784 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 27 13:01:03.453001 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 27 13:01:03.453239 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.453466 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 27 13:01:03.453681 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 13:01:03.453940 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 27 13:01:03.454178 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 27 13:01:03.454423 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.454650 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 27 13:01:03.454899 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 13:01:03.455112 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 27 13:01:03.455324 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 27 13:01:03.455586 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.455822 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 27 13:01:03.456036 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 13:01:03.456248 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 27 13:01:03.456473 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 27 13:01:03.456721 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.456962 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 27 13:01:03.457174 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 13:01:03.457397 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 27 13:01:03.457627 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 27 13:01:03.457872 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 27 13:01:03.458111 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 27 13:01:03.458324 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 13:01:03.458549 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 27 13:01:03.458795 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 27 13:01:03.459028 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 27 13:01:03.459243 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 27 13:01:03.459496 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 27 13:01:03.459737 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 27 13:01:03.459986 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 27 13:01:03.460220 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 27 13:01:03.460446 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 27 13:01:03.460658 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 27 13:01:03.460923 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 27 13:01:03.461165 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 27 13:01:03.461412 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 27 13:01:03.461706 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 27 13:01:03.461928 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 27 13:01:03.462165 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 27 13:01:03.462427 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 27 13:01:03.462653 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 27 13:01:03.462950 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 27 13:01:03.463174 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 27 13:01:03.463403 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 13:01:03.463645 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 27 13:01:03.463881 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 13:01:03.464126 kernel: pci_bus 0000:02: extended config space not accessible Jan 27 13:01:03.464371 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 27 13:01:03.464610 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 27 13:01:03.464861 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 13:01:03.465134 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 27 13:01:03.465355 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 27 13:01:03.465586 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 13:01:03.465875 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 27 13:01:03.466100 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 27 13:01:03.466340 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 13:01:03.466568 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 13:01:03.466802 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 13:01:03.467019 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 13:01:03.467232 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 13:01:03.467460 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 13:01:03.467500 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 27 13:01:03.467515 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 27 13:01:03.467528 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 27 13:01:03.467541 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 27 13:01:03.467564 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 27 13:01:03.467585 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 27 13:01:03.467599 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 27 13:01:03.467627 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 27 13:01:03.467640 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 27 13:01:03.467654 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 27 13:01:03.467667 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 27 13:01:03.467680 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 27 13:01:03.467719 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
27 13:01:03.467734 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 27 13:01:03.467765 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 27 13:01:03.467778 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 27 13:01:03.467792 kernel: iommu: Default domain type: Translated Jan 27 13:01:03.467805 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 27 13:01:03.467818 kernel: PCI: Using ACPI for IRQ routing Jan 27 13:01:03.467831 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 27 13:01:03.467844 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 27 13:01:03.467871 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 27 13:01:03.468087 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 27 13:01:03.468301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 27 13:01:03.468525 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 27 13:01:03.468546 kernel: vgaarb: loaded Jan 27 13:01:03.468560 kernel: clocksource: Switched to clocksource kvm-clock Jan 27 13:01:03.468591 kernel: VFS: Disk quotas dquot_6.6.0 Jan 27 13:01:03.468606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 27 13:01:03.468619 kernel: pnp: PnP ACPI init Jan 27 13:01:03.468873 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 27 13:01:03.468896 kernel: pnp: PnP ACPI: found 5 devices Jan 27 13:01:03.468910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 27 13:01:03.468924 kernel: NET: Registered PF_INET protocol family Jan 27 13:01:03.468955 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 27 13:01:03.468969 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 27 13:01:03.468983 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 27 13:01:03.468996 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 27 13:01:03.469009 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 27 13:01:03.469023 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 27 13:01:03.469036 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 27 13:01:03.469064 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 27 13:01:03.469078 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 27 13:01:03.469091 kernel: NET: Registered PF_XDP protocol family Jan 27 13:01:03.469303 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 27 13:01:03.469534 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 27 13:01:03.469780 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 27 13:01:03.470022 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 27 13:01:03.470279 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 27 13:01:03.470504 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 27 13:01:03.470736 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 27 13:01:03.470962 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 27 13:01:03.471175 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 27 13:01:03.471397 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 27 13:01:03.471631 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 27 13:01:03.471866 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 27 13:01:03.472090 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 27 13:01:03.472310 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 27 13:01:03.472535 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 27 13:01:03.472783 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 27 13:01:03.473002 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 27 13:01:03.473318 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 27 13:01:03.473544 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 27 13:01:03.473795 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 27 13:01:03.474009 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 27 13:01:03.474226 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 27 13:01:03.474451 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 27 13:01:03.474663 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 27 13:01:03.474914 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 27 13:01:03.475127 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 27 13:01:03.475339 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 27 13:01:03.475564 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 27 13:01:03.475800 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 27 13:01:03.477715 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 27 13:01:03.477944 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 27 13:01:03.478161 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 27 13:01:03.478376 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 27 13:01:03.478623 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 27 13:01:03.478887 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 27 13:01:03.479103 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 27 13:01:03.479316 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 27 13:01:03.479544 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 27 13:01:03.479777 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 27 13:01:03.479991 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 27 13:01:03.480228 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 27 13:01:03.480455 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 27 13:01:03.480669 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 27 13:01:03.480904 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 27 13:01:03.481116 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 27 13:01:03.481375 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 27 13:01:03.481612 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 27 13:01:03.481866 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 27 13:01:03.482089 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 27 13:01:03.482324 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 27 13:01:03.482563 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 27 13:01:03.482786 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 27 13:01:03.483002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 27 13:01:03.483215 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 27 13:01:03.483463 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 27 13:01:03.483660 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 27 13:01:03.483914 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 27 13:01:03.484139 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 27 13:01:03.484356 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 27 13:01:03.484599 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 27 13:01:03.484845 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 27 13:01:03.485048 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 27 13:01:03.485283 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 27 13:01:03.485537 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 27 13:01:03.485762 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 27 13:01:03.485989 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 27 13:01:03.486233 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 27 13:01:03.486450 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 27 13:01:03.486652 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 27 13:01:03.486894 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 27 13:01:03.487126 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 27 13:01:03.487352 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 27 13:01:03.487592 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 27 13:01:03.487818 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 27 13:01:03.488044 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 27 13:01:03.488273 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 27 13:01:03.488510 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 27 13:01:03.488744 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 27 13:01:03.488957 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 27 13:01:03.489160 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 27 13:01:03.489361 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 27 13:01:03.489392 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 27 13:01:03.489433 kernel: PCI: CLS 0 bytes, default 64 Jan 27 13:01:03.489449 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 27 13:01:03.489463 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 27 13:01:03.489477 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 27 13:01:03.489491 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 27 13:01:03.489505 kernel: Initialise system trusted keyrings Jan 27 13:01:03.489519 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 27 13:01:03.489547 
kernel: Key type asymmetric registered Jan 27 13:01:03.489561 kernel: Asymmetric key parser 'x509' registered Jan 27 13:01:03.489575 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 27 13:01:03.489589 kernel: io scheduler mq-deadline registered Jan 27 13:01:03.489602 kernel: io scheduler kyber registered Jan 27 13:01:03.489616 kernel: io scheduler bfq registered Jan 27 13:01:03.489849 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 27 13:01:03.490095 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 27 13:01:03.490303 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.490551 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 27 13:01:03.490785 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 27 13:01:03.491045 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.491291 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 27 13:01:03.491519 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 27 13:01:03.491755 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.491969 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 27 13:01:03.492187 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 27 13:01:03.492441 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.492655 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 27 13:01:03.492889 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 27 13:01:03.493113 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.493336 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 27 13:01:03.493562 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 27 13:01:03.493822 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.494037 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 27 13:01:03.494282 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 27 13:01:03.494517 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.494774 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 27 13:01:03.494990 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 27 13:01:03.495203 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 27 13:01:03.495224 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 27 13:01:03.495240 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 27 13:01:03.495254 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 27 13:01:03.495286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 27 13:01:03.495302 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 27 13:01:03.495316 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 27 
13:01:03.495330 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 27 13:01:03.495344 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 27 13:01:03.495596 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 27 13:01:03.495619 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 27 13:01:03.495866 kernel: rtc_cmos 00:03: registered as rtc0 Jan 27 13:01:03.496073 kernel: rtc_cmos 00:03: setting system clock to 2026-01-27T13:01:01 UTC (1769518861) Jan 27 13:01:03.496276 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 27 13:01:03.496296 kernel: intel_pstate: CPU model not supported Jan 27 13:01:03.496311 kernel: NET: Registered PF_INET6 protocol family Jan 27 13:01:03.496324 kernel: Segment Routing with IPv6 Jan 27 13:01:03.496338 kernel: In-situ OAM (IOAM) with IPv6 Jan 27 13:01:03.496374 kernel: NET: Registered PF_PACKET protocol family Jan 27 13:01:03.496398 kernel: Key type dns_resolver registered Jan 27 13:01:03.496412 kernel: IPI shorthand broadcast: enabled Jan 27 13:01:03.496440 kernel: sched_clock: Marking stable (2428004292, 217101106)->(2782935328, -137829930) Jan 27 13:01:03.496456 kernel: registered taskstats version 1 Jan 27 13:01:03.496484 kernel: Loading compiled-in X.509 certificates Jan 27 13:01:03.496499 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 6484c7cab6759552a733ebda9eed387628fa30ee' Jan 27 13:01:03.496526 kernel: Demotion targets for Node 0: null Jan 27 13:01:03.496540 kernel: Key type .fscrypt registered Jan 27 13:01:03.496553 kernel: Key type fscrypt-provisioning registered Jan 27 13:01:03.496567 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 27 13:01:03.496581 kernel: ima: Allocated hash algorithm: sha1 Jan 27 13:01:03.496595 kernel: ima: No architecture policies found Jan 27 13:01:03.496609 kernel: clk: Disabling unused clocks Jan 27 13:01:03.496635 kernel: Freeing unused kernel image (initmem) memory: 15532K Jan 27 13:01:03.496651 kernel: Write protecting the kernel read-only data: 47104k Jan 27 13:01:03.496664 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 27 13:01:03.496679 kernel: Run /init as init process Jan 27 13:01:03.496743 kernel: with arguments: Jan 27 13:01:03.496759 kernel: /init Jan 27 13:01:03.496772 kernel: with environment: Jan 27 13:01:03.496802 kernel: HOME=/ Jan 27 13:01:03.496817 kernel: TERM=linux Jan 27 13:01:03.496831 kernel: ACPI: bus type USB registered Jan 27 13:01:03.496845 kernel: usbcore: registered new interface driver usbfs Jan 27 13:01:03.496858 kernel: usbcore: registered new interface driver hub Jan 27 13:01:03.496872 kernel: usbcore: registered new device driver usb Jan 27 13:01:03.497109 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 27 13:01:03.497354 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 27 13:01:03.497604 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 27 13:01:03.497849 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 27 13:01:03.498069 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 27 13:01:03.498323 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 27 13:01:03.498623 kernel: hub 1-0:1.0: USB hub found Jan 27 13:01:03.498882 kernel: hub 1-0:1.0: 4 ports detected Jan 27 13:01:03.499154 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 27 13:01:03.499418 kernel: hub 2-0:1.0: USB hub found Jan 27 13:01:03.499653 kernel: hub 2-0:1.0: 4 ports detected Jan 27 13:01:03.499674 kernel: SCSI subsystem initialized Jan 27 13:01:03.499706 kernel: libata version 3.00 loaded. Jan 27 13:01:03.499945 kernel: ahci 0000:00:1f.2: version 3.0 Jan 27 13:01:03.499967 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 27 13:01:03.500175 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 27 13:01:03.500405 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 27 13:01:03.500621 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 27 13:01:03.500952 kernel: scsi host0: ahci Jan 27 13:01:03.501206 kernel: scsi host1: ahci Jan 27 13:01:03.501463 kernel: scsi host2: ahci Jan 27 13:01:03.501710 kernel: scsi host3: ahci Jan 27 13:01:03.501953 kernel: scsi host4: ahci Jan 27 13:01:03.502180 kernel: scsi host5: ahci Jan 27 13:01:03.502221 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 27 13:01:03.502236 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 27 13:01:03.502250 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 27 13:01:03.502264 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 27 13:01:03.502278 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 27 13:01:03.502292 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 27 13:01:03.502572 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 27 13:01:03.502596 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 27 13:01:03.502610 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 27 13:01:03.502624 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 27 13:01:03.502638 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 27 13:01:03.502665 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 27 13:01:03.502678 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 27 13:01:03.502725 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 27 13:01:03.502751 kernel: usbcore: registered new interface driver usbhid Jan 27 13:01:03.502989 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 27 13:01:03.503011 kernel: usbhid: USB HID core driver Jan 27 13:01:03.503218 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 27 13:01:03.503239 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 27 13:01:03.503273 kernel: GPT:25804799 != 125829119 Jan 27 13:01:03.503287 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 27 13:01:03.503302 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 27 13:01:03.503587 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 27 13:01:03.503610 kernel: GPT:25804799 != 125829119 Jan 27 13:01:03.503624 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 27 13:01:03.503668 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 27 13:01:03.503684 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 27 13:01:03.503697 kernel: device-mapper: uevent: version 1.0.3 Jan 27 13:01:03.503744 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 27 13:01:03.503758 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 27 13:01:03.503772 kernel: raid6: sse2x4 gen() 14213 MB/s Jan 27 13:01:03.503786 kernel: raid6: sse2x2 gen() 9969 MB/s Jan 27 13:01:03.503817 kernel: raid6: sse2x1 gen() 10372 MB/s Jan 27 13:01:03.503832 kernel: raid6: using algorithm sse2x4 gen() 14213 MB/s Jan 27 13:01:03.503847 kernel: raid6: .... xor() 8161 MB/s, rmw enabled Jan 27 13:01:03.503861 kernel: raid6: using ssse3x2 recovery algorithm Jan 27 13:01:03.503875 kernel: xor: automatically using best checksumming function avx Jan 27 13:01:03.503889 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 27 13:01:03.503903 kernel: BTRFS: device fsid 268ba60b-442b-419d-aa1b-56d952d69f85 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (194) Jan 27 13:01:03.503931 kernel: BTRFS info (device dm-0): first mount of filesystem 268ba60b-442b-419d-aa1b-56d952d69f85 Jan 27 13:01:03.503946 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 27 13:01:03.503960 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 27 13:01:03.503974 kernel: BTRFS info (device dm-0): enabling free space tree Jan 27 13:01:03.503987 kernel: loop: module loaded Jan 27 13:01:03.504001 kernel: loop0: detected capacity change from 0 to 100536 Jan 27 13:01:03.504015 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 27 13:01:03.504044 systemd[1]: Successfully made /usr/ read-only. Jan 27 13:01:03.504063 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 13:01:03.504079 systemd[1]: Detected virtualization kvm. Jan 27 13:01:03.504093 systemd[1]: Detected architecture x86-64. Jan 27 13:01:03.504107 systemd[1]: Running in initrd. Jan 27 13:01:03.504121 systemd[1]: No hostname configured, using default hostname. Jan 27 13:01:03.504150 systemd[1]: Hostname set to . Jan 27 13:01:03.504165 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 13:01:03.504179 systemd[1]: Queued start job for default target initrd.target. Jan 27 13:01:03.504194 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 13:01:03.504208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 13:01:03.504223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 13:01:03.504252 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 27 13:01:03.504268 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 13:01:03.504284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 27 13:01:03.504299 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 27 13:01:03.504313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 27 13:01:03.504328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 27 13:01:03.504358 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 27 13:01:03.504372 systemd[1]: Reached target paths.target - Path Units. Jan 27 13:01:03.504398 systemd[1]: Reached target slices.target - Slice Units. Jan 27 13:01:03.504413 systemd[1]: Reached target swap.target - Swaps. Jan 27 13:01:03.504428 systemd[1]: Reached target timers.target - Timer Units. Jan 27 13:01:03.504442 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 27 13:01:03.504457 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 13:01:03.504487 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 13:01:03.504502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 27 13:01:03.504517 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 27 13:01:03.504531 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 13:01:03.504546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 13:01:03.504560 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 13:01:03.504575 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 13:01:03.504604 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 27 13:01:03.504619 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 27 13:01:03.504635 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 27 13:01:03.504649 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 27 13:01:03.504665 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 27 13:01:03.504680 systemd[1]: Starting systemd-fsck-usr.service... Jan 27 13:01:03.504723 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 13:01:03.504740 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 13:01:03.504756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 13:01:03.504771 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 27 13:01:03.504802 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 13:01:03.504817 systemd[1]: Finished systemd-fsck-usr.service. Jan 27 13:01:03.504832 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 27 13:01:03.504893 systemd-journald[331]: Collecting audit messages is enabled. Jan 27 13:01:03.504947 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 27 13:01:03.504962 kernel: Bridge firewalling registered Jan 27 13:01:03.504977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 13:01:03.504993 systemd-journald[331]: Journal started Jan 27 13:01:03.505039 systemd-journald[331]: Runtime Journal (/run/log/journal/291f86d398f54868b3987dd0f033571b) is 4.7M, max 37.7M, 33M free. 
Jan 27 13:01:03.437577 systemd-modules-load[333]: Inserted module 'br_netfilter' Jan 27 13:01:03.521030 kernel: audit: type=1130 audit(1769518863.512:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.521061 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 13:01:03.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.526781 kernel: audit: type=1130 audit(1769518863.520:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.526870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 13:01:03.533533 kernel: audit: type=1130 audit(1769518863.526:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.529758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 27 13:01:03.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.541718 kernel: audit: type=1130 audit(1769518863.533:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.543561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 27 13:01:03.545257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 13:01:03.549849 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 13:01:03.553860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 13:01:03.577806 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 13:01:03.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.586710 kernel: audit: type=1130 audit(1769518863.577:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.585250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 27 13:01:03.587556 systemd-tmpfiles[350]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 27 13:01:03.599839 kernel: audit: type=1130 audit(1769518863.585:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.599868 kernel: audit: type=1334 audit(1769518863.589:8): prog-id=6 op=LOAD Jan 27 13:01:03.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.589000 audit: BPF prog-id=6 op=LOAD Jan 27 13:01:03.591677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 13:01:03.601005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 13:01:03.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.607756 kernel: audit: type=1130 audit(1769518863.601:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.610122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 13:01:03.617574 kernel: audit: type=1130 audit(1769518863.610:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.613050 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 27 13:01:03.643732 dracut-cmdline[371]: dracut-109 Jan 27 13:01:03.646491 dracut-cmdline[371]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5b839912e96169b2be69ecc38c22dede1b19843035b80450c55f71e4c748b699 Jan 27 13:01:03.684264 systemd-resolved[365]: Positive Trust Anchors: Jan 27 13:01:03.684285 systemd-resolved[365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 13:01:03.684292 systemd-resolved[365]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 13:01:03.684345 systemd-resolved[365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 13:01:03.726990 systemd-resolved[365]: Defaulting to hostname 'linux'. Jan 27 13:01:03.729828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 13:01:03.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.731427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 13:01:03.765826 kernel: Loading iSCSI transport class v2.0-870. Jan 27 13:01:03.783994 kernel: iscsi: registered transport (tcp) Jan 27 13:01:03.814970 kernel: iscsi: registered transport (qla4xxx) Jan 27 13:01:03.815058 kernel: QLogic iSCSI HBA Driver Jan 27 13:01:03.848871 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 13:01:03.873008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 13:01:03.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.874968 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 27 13:01:03.941598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 27 13:01:03.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.943909 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 27 13:01:03.946848 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 27 13:01:03.984282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 27 13:01:03.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:03.985000 audit: BPF prog-id=7 op=LOAD Jan 27 13:01:03.986000 audit: BPF prog-id=8 op=LOAD Jan 27 13:01:03.987921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 13:01:04.033850 systemd-udevd[600]: Using default interface naming scheme 'v257'. Jan 27 13:01:04.050605 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 13:01:04.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:04.054928 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 27 13:01:04.138184 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 13:01:04.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.140000 audit: BPF prog-id=9 op=LOAD Jan 27 13:01:04.142346 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 13:01:04.160964 dracut-pre-trigger[678]: rd.md=0: removing MD RAID activation Jan 27 13:01:04.203626 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 13:01:04.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.207921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 13:01:04.218113 systemd-networkd[703]: lo: Link UP Jan 27 13:01:04.218124 systemd-networkd[703]: lo: Gained carrier Jan 27 13:01:04.220881 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 13:01:04.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.222683 systemd[1]: Reached target network.target - Network. Jan 27 13:01:04.361396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 13:01:04.374381 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 27 13:01:04.374416 kernel: audit: type=1130 audit(1769518864.362:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.364946 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 27 13:01:04.525209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 27 13:01:04.538861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 27 13:01:04.565843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 13:01:04.588025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 27 13:01:04.592105 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 27 13:01:04.599024 kernel: cryptd: max_cpu_qlen set to 1000 Jan 27 13:01:04.618758 kernel: AES CTR mode by8 optimization enabled Jan 27 13:01:04.630775 disk-uuid[770]: Primary Header is updated. Jan 27 13:01:04.630775 disk-uuid[770]: Secondary Entries is updated. Jan 27 13:01:04.630775 disk-uuid[770]: Secondary Header is updated. Jan 27 13:01:04.657747 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 27 13:01:04.733639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
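Just above, systemd reports the block devices it located through udev-managed symlinks under /dev/disk/by-label and /dev/disk/by-partlabel (EFI-SYSTEM, ROOT, OEM, USR-A). Purely as an illustrative sketch of what those symlinks resolve to on a running Linux system, assuming the directories exist:

from pathlib import Path

def labelled_devices(base="/dev/disk/by-label"):
    """Map labels exposed by udev to the block devices their symlinks resolve to."""
    root = Path(base)
    if not root.is_dir():  # nothing enumerated (yet)
        return {}
    return {link.name: link.resolve() for link in root.iterdir()}

# On this host the log suggests e.g. OEM -> /dev/vda6 and ROOT -> /dev/vda9,
# but the actual mapping is whatever udev created.
print(labelled_devices())
print(labelled_devices("/dev/disk/by-partlabel"))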
Jan 27 13:01:04.735152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 13:01:04.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.738443 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 13:01:04.746497 kernel: audit: type=1131 audit(1769518864.737:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.749793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 13:01:04.760065 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 13:01:04.763731 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 27 13:01:04.766471 systemd-networkd[703]: eth0: Link UP Jan 27 13:01:04.766799 systemd-networkd[703]: eth0: Gained carrier Jan 27 13:01:04.766814 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 13:01:04.776789 systemd-networkd[703]: eth0: DHCPv4 address 10.230.66.190/30, gateway 10.230.66.189 acquired from 10.230.66.189 Jan 27 13:01:04.820105 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 27 13:01:04.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.876738 kernel: audit: type=1130 audit(1769518864.871:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.876495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 13:01:04.882753 kernel: audit: type=1130 audit(1769518864.876:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:04.879528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 27 13:01:04.883490 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 13:01:04.885058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 27 13:01:04.888007 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 27 13:01:04.914047 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 27 13:01:04.920159 kernel: audit: type=1130 audit(1769518864.914:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:04.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:05.728039 disk-uuid[785]: Warning: The kernel is still using the old partition table. Jan 27 13:01:05.728039 disk-uuid[785]: The new table will be used at the next reboot or after you Jan 27 13:01:05.728039 disk-uuid[785]: run partprobe(8) or kpartx(8) Jan 27 13:01:05.728039 disk-uuid[785]: The operation has completed successfully. Jan 27 13:01:05.742018 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 27 13:01:05.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:05.742211 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 27 13:01:05.750732 kernel: audit: type=1130 audit(1769518865.742:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:05.748824 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 27 13:01:05.761200 kernel: audit: type=1131 audit(1769518865.742:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:05.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:05.803740 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Jan 27 13:01:05.808758 kernel: BTRFS info (device vda6): first mount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 13:01:05.808905 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 13:01:05.814409 kernel: BTRFS info (device vda6): turning on async discard Jan 27 13:01:05.814448 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 13:01:05.822725 kernel: BTRFS info (device vda6): last unmount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 13:01:05.823605 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 27 13:01:05.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:05.826975 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 27 13:01:05.831929 kernel: audit: type=1130 audit(1769518865.823:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.139803 ignition[875]: Ignition 2.24.0 Jan 27 13:01:06.139839 ignition[875]: Stage: fetch-offline Jan 27 13:01:06.139961 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:06.139990 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:06.143164 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
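The disk-uuid warning earlier in this stretch notes that the kernel keeps using the old partition table until the next reboot or until partprobe(8)/kpartx(8) is run. As a hedged illustration of that remark only (the initrd does not do this here), a Python sketch that asks the kernel to re-read a disk's partition table through the BLKRRPART ioctl, one of the mechanisms such tools can rely on; the device name is taken from the log and is an example, and the call needs root on an otherwise unused disk:

import fcntl, os

BLKRRPART = 0x125f  # _IO(0x12, 95) from <linux/fs.h>: re-read partition table

def reread_partitions(disk="/dev/vda"):
    """Ask the kernel to re-read the whole-disk partition table (requires privileges)."""
    fd = os.open(disk, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, BLKRRPART)
    finally:
        os.close(fd)

# reread_partitions("/dev/vda")  # commented out: only meaningful on the target host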
Jan 27 13:01:06.149729 kernel: audit: type=1130 audit(1769518866.143:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.140183 ignition[875]: parsed url from cmdline: "" Jan 27 13:01:06.146933 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 27 13:01:06.140190 ignition[875]: no config URL provided Jan 27 13:01:06.140200 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 13:01:06.140219 ignition[875]: no config at "/usr/lib/ignition/user.ign" Jan 27 13:01:06.140227 ignition[875]: failed to fetch config: resource requires networking Jan 27 13:01:06.140734 ignition[875]: Ignition finished successfully Jan 27 13:01:06.194299 ignition[884]: Ignition 2.24.0 Jan 27 13:01:06.195303 ignition[884]: Stage: fetch Jan 27 13:01:06.195610 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:06.195629 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:06.195865 ignition[884]: parsed url from cmdline: "" Jan 27 13:01:06.195872 ignition[884]: no config URL provided Jan 27 13:01:06.195904 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Jan 27 13:01:06.195921 ignition[884]: no config at "/usr/lib/ignition/user.ign" Jan 27 13:01:06.196353 ignition[884]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 27 13:01:06.197384 ignition[884]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 27 13:01:06.197415 ignition[884]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 27 13:01:06.219390 ignition[884]: GET result: OK Jan 27 13:01:06.219775 ignition[884]: parsing config with SHA512: 6c1d94cff537872611dcdfc367a96fb0dcd172e35577d7de4643f4837a1875e3aecda34b499751a32a8592ccca3b0f8aac76e28b68b76b30a1eee8b76670c9c5 Jan 27 13:01:06.231050 unknown[884]: fetched base config from "system" Jan 27 13:01:06.231071 unknown[884]: fetched base config from "system" Jan 27 13:01:06.231501 ignition[884]: fetch: fetch complete Jan 27 13:01:06.231080 unknown[884]: fetched user config from "openstack" Jan 27 13:01:06.231510 ignition[884]: fetch: fetch passed Jan 27 13:01:06.239814 kernel: audit: type=1130 audit(1769518866.234:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.233929 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 27 13:01:06.231831 ignition[884]: Ignition finished successfully Jan 27 13:01:06.236888 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 27 13:01:06.269106 ignition[890]: Ignition 2.24.0 Jan 27 13:01:06.269129 ignition[890]: Stage: kargs Jan 27 13:01:06.269370 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:06.272273 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
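In the fetch stage above, Ignition finds no config drive, retrieves http://169.254.169.254/openstack/latest/user_data from the OpenStack metadata service, and logs the SHA512 of the configuration it parsed. The Python sketch below mirrors that fetch-and-hash step for illustration only; Ignition itself is a Go program and performs far more validation than this.

import hashlib
import urllib.request

USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"  # endpoint shown in the log

def fetch_user_data(url=USER_DATA_URL, timeout=10):
    """Fetch instance user data and return (payload, sha512-hex), echoing the 'parsing config with SHA512' line."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        payload = resp.read()
    return payload, hashlib.sha512(payload).hexdigest()

# data, digest = fetch_user_data()  # only reachable from inside a cloud instance
# print(digest)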
Jan 27 13:01:06.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.269389 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:06.270657 ignition[890]: kargs: kargs passed Jan 27 13:01:06.270743 ignition[890]: Ignition finished successfully Jan 27 13:01:06.276390 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 27 13:01:06.311145 ignition[896]: Ignition 2.24.0 Jan 27 13:01:06.312134 ignition[896]: Stage: disks Jan 27 13:01:06.312445 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:06.312464 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:06.316491 ignition[896]: disks: disks passed Jan 27 13:01:06.317207 ignition[896]: Ignition finished successfully Jan 27 13:01:06.319814 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 27 13:01:06.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.320832 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 27 13:01:06.321921 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 27 13:01:06.323501 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 13:01:06.325078 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 13:01:06.326434 systemd[1]: Reached target basic.target - Basic System. Jan 27 13:01:06.329256 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 27 13:01:06.374580 systemd-fsck[904]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 27 13:01:06.379184 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 27 13:01:06.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.381391 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 27 13:01:06.519732 kernel: EXT4-fs (vda9): mounted filesystem f82d5d40-607d-4567-b2c1-7e3e0fab898a r/w with ordered data mode. Quota mode: none. Jan 27 13:01:06.520292 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 27 13:01:06.521633 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 27 13:01:06.525071 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 13:01:06.527809 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 27 13:01:06.528932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 27 13:01:06.530874 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 27 13:01:06.533072 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 27 13:01:06.533116 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 13:01:06.536605 systemd-networkd[703]: eth0: Gained IPv6LL Jan 27 13:01:06.543530 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 27 13:01:06.546880 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 27 13:01:06.558737 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (912) Jan 27 13:01:06.563279 kernel: BTRFS info (device vda6): first mount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 13:01:06.565933 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 13:01:06.599720 kernel: BTRFS info (device vda6): turning on async discard Jan 27 13:01:06.599801 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 13:01:06.621587 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 27 13:01:06.639724 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:06.774785 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 27 13:01:06.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.777591 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 27 13:01:06.779376 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 27 13:01:06.804276 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 27 13:01:06.807922 kernel: BTRFS info (device vda6): last unmount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 13:01:06.830642 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 27 13:01:06.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:06.852725 ignition[1013]: INFO : Ignition 2.24.0 Jan 27 13:01:06.852725 ignition[1013]: INFO : Stage: mount Jan 27 13:01:06.854834 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:06.854834 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:06.857527 ignition[1013]: INFO : mount: mount passed Jan 27 13:01:06.857527 ignition[1013]: INFO : Ignition finished successfully Jan 27 13:01:06.860294 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 27 13:01:06.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:07.673733 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:07.880444 systemd-networkd[703]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90af:24:19ff:fee6:42be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90af:24:19ff:fee6:42be/64 assigned by NDisc. Jan 27 13:01:07.880461 systemd-networkd[703]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 27 13:01:09.686728 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:13.694737 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:13.704248 coreos-metadata[914]: Jan 27 13:01:13.704 WARN failed to locate config-drive, using the metadata service API instead Jan 27 13:01:13.727930 coreos-metadata[914]: Jan 27 13:01:13.727 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 27 13:01:13.745515 coreos-metadata[914]: Jan 27 13:01:13.745 INFO Fetch successful Jan 27 13:01:13.746396 coreos-metadata[914]: Jan 27 13:01:13.746 INFO wrote hostname srv-4nwk8.gb1.brightbox.com to /sysroot/etc/hostname Jan 27 13:01:13.748711 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 27 13:01:13.748890 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 27 13:01:13.763932 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 27 13:01:13.763969 kernel: audit: type=1130 audit(1769518873.750:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:13.763990 kernel: audit: type=1131 audit(1769518873.750:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:13.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:13.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:13.752404 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 27 13:01:13.791314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 27 13:01:13.816750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Jan 27 13:01:13.820146 kernel: BTRFS info (device vda6): first mount of filesystem 9734ba71-0bae-447a-acd4-ca25b06d0b18 Jan 27 13:01:13.820222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 27 13:01:13.827105 kernel: BTRFS info (device vda6): turning on async discard Jan 27 13:01:13.827159 kernel: BTRFS info (device vda6): enabling free space tree Jan 27 13:01:13.829901 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
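Above, coreos-metadata gives up waiting for the config drive, falls back to the metadata API, fetches http://169.254.169.254/latest/meta-data/hostname, and writes the result to /sysroot/etc/hostname. A minimal Python equivalent of that single step, again only as a sketch of what the log records:

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"  # endpoint shown in the log

def write_hostname(target="/sysroot/etc/hostname", url=HOSTNAME_URL, timeout=10):
    """Fetch the instance hostname from the metadata service and persist it, as coreos-metadata does above."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        hostname = resp.read().decode().strip()
    with open(target, "w") as f:
        f.write(hostname + "\n")
    return hostname

# write_hostname()  # on this host the log shows it yielded 'srv-4nwk8.gb1.brightbox.com'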
Jan 27 13:01:13.866673 ignition[1047]: INFO : Ignition 2.24.0 Jan 27 13:01:13.866673 ignition[1047]: INFO : Stage: files Jan 27 13:01:13.868397 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:13.868397 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:13.868397 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 27 13:01:13.871654 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 27 13:01:13.871654 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 27 13:01:13.876313 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 27 13:01:13.877497 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 27 13:01:13.878618 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 27 13:01:13.878532 unknown[1047]: wrote ssh authorized keys file for user: core Jan 27 13:01:13.880525 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 27 13:01:13.880525 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 27 13:01:14.079009 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 27 13:01:14.352579 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 27 13:01:14.352579 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 13:01:14.358109 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 27 13:01:14.365587 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 13:01:14.365587 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 27 13:01:14.365587 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 13:01:14.369452 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 13:01:14.369452 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 13:01:14.369452 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 27 13:01:14.862582 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 27 13:01:16.925827 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 27 13:01:16.925827 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 27 13:01:16.930608 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 13:01:16.932701 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 27 13:01:16.932701 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 27 13:01:16.937270 ignition[1047]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 27 13:01:16.937270 ignition[1047]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 27 13:01:16.937270 ignition[1047]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 27 13:01:16.937270 ignition[1047]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 27 13:01:16.937270 ignition[1047]: INFO : files: files passed Jan 27 13:01:16.937270 ignition[1047]: INFO : Ignition finished successfully Jan 27 13:01:16.953467 kernel: audit: type=1130 audit(1769518876.940:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:16.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:16.937419 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 27 13:01:16.945476 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 27 13:01:16.956236 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 27 13:01:16.966849 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 27 13:01:16.967117 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 27 13:01:16.979733 kernel: audit: type=1130 audit(1769518876.968:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:16.979843 kernel: audit: type=1131 audit(1769518876.968:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:16.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:16.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:16.999675 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 13:01:17.001612 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 27 13:01:17.003290 initrd-setup-root-after-ignition[1078]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 27 13:01:17.009967 kernel: audit: type=1130 audit(1769518877.003:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.002827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 13:01:17.004955 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 27 13:01:17.012910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 27 13:01:17.078136 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 27 13:01:17.079374 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 27 13:01:17.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.081734 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 27 13:01:17.093394 kernel: audit: type=1130 audit(1769518877.080:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.093441 kernel: audit: type=1131 audit(1769518877.080:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.092142 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 27 13:01:17.093977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 27 13:01:17.095851 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 27 13:01:17.135602 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 13:01:17.142049 kernel: audit: type=1130 audit(1769518877.135:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 27 13:01:17.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.139937 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 27 13:01:17.184631 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 27 13:01:17.186236 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 27 13:01:17.187185 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 13:01:17.188736 systemd[1]: Stopped target timers.target - Timer Units. Jan 27 13:01:17.190240 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 27 13:01:17.196740 kernel: audit: type=1131 audit(1769518877.190:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.190528 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 27 13:01:17.196728 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 27 13:01:17.197680 systemd[1]: Stopped target basic.target - Basic System. Jan 27 13:01:17.199059 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 27 13:01:17.200454 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 27 13:01:17.201854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 27 13:01:17.203581 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 27 13:01:17.205178 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 27 13:01:17.206640 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 27 13:01:17.208272 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 27 13:01:17.209725 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 27 13:01:17.211269 systemd[1]: Stopped target swap.target - Swaps. Jan 27 13:01:17.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.212799 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 27 13:01:17.213255 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 27 13:01:17.214802 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 27 13:01:17.215814 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 13:01:17.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.217139 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 27 13:01:17.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:17.217567 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 13:01:17.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.218776 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 27 13:01:17.219055 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 27 13:01:17.220983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 27 13:01:17.221167 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 27 13:01:17.222231 systemd[1]: ignition-files.service: Deactivated successfully. Jan 27 13:01:17.222505 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 27 13:01:17.226025 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 27 13:01:17.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.230066 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 27 13:01:17.231705 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 27 13:01:17.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.232086 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 13:01:17.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.234415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 27 13:01:17.235499 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 13:01:17.237829 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 27 13:01:17.238035 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 27 13:01:17.250231 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 27 13:01:17.250400 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 27 13:01:17.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.281423 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 27 13:01:17.284745 ignition[1102]: INFO : Ignition 2.24.0 Jan 27 13:01:17.284745 ignition[1102]: INFO : Stage: umount Jan 27 13:01:17.287554 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 27 13:01:17.287554 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 27 13:01:17.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:17.288936 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 27 13:01:17.289170 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 27 13:01:17.294745 ignition[1102]: INFO : umount: umount passed Jan 27 13:01:17.294745 ignition[1102]: INFO : Ignition finished successfully Jan 27 13:01:17.295511 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 27 13:01:17.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.295789 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 27 13:01:17.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.297133 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 27 13:01:17.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.297276 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 27 13:01:17.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.298113 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 27 13:01:17.298198 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 27 13:01:17.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.299478 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 27 13:01:17.299559 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 27 13:01:17.300793 systemd[1]: Stopped target network.target - Network. Jan 27 13:01:17.301924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 27 13:01:17.302005 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 27 13:01:17.303373 systemd[1]: Stopped target paths.target - Path Units. Jan 27 13:01:17.304587 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 27 13:01:17.304999 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 13:01:17.305989 systemd[1]: Stopped target slices.target - Slice Units. Jan 27 13:01:17.308984 systemd[1]: Stopped target sockets.target - Socket Units. Jan 27 13:01:17.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.310476 systemd[1]: iscsid.socket: Deactivated successfully. Jan 27 13:01:17.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.310562 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 27 13:01:17.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.311678 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 27 13:01:17.311776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 27 13:01:17.313068 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 27 13:01:17.313122 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 27 13:01:17.314651 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 27 13:01:17.314782 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 27 13:01:17.316080 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 27 13:01:17.316256 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 27 13:01:17.317323 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 27 13:01:17.317398 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 27 13:01:17.319060 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 27 13:01:17.321285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 27 13:01:17.334330 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 27 13:01:17.335299 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 27 13:01:17.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.342000 audit: BPF prog-id=6 op=UNLOAD Jan 27 13:01:17.338512 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 27 13:01:17.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.338786 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 27 13:01:17.347000 audit: BPF prog-id=9 op=UNLOAD Jan 27 13:01:17.348502 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 27 13:01:17.350261 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 27 13:01:17.350368 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 27 13:01:17.353126 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 27 13:01:17.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.353914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 27 13:01:17.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.354000 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 27 13:01:17.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.354848 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 27 13:01:17.354916 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 27 13:01:17.356294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 27 13:01:17.356362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 27 13:01:17.357856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 13:01:17.368006 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 27 13:01:17.370332 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 13:01:17.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.374898 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 27 13:01:17.374995 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 27 13:01:17.377189 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 27 13:01:17.377248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 13:01:17.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.377910 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 27 13:01:17.377987 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 27 13:01:17.381038 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 27 13:01:17.381119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 27 13:01:17.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.384957 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 27 13:01:17.385082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 27 13:01:17.388481 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 27 13:01:17.391045 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 27 13:01:17.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.391168 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 27 13:01:17.392663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 27 13:01:17.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:17.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.393816 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 13:01:17.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.395345 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 27 13:01:17.395457 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 27 13:01:17.396226 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 27 13:01:17.396293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 27 13:01:17.397104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 27 13:01:17.397227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 27 13:01:17.420005 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 27 13:01:17.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.420240 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 27 13:01:17.423109 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 27 13:01:17.423327 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 27 13:01:17.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:17.425557 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 27 13:01:17.428488 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 27 13:01:17.457259 systemd[1]: Switching root. Jan 27 13:01:17.502107 systemd-journald[331]: Journal stopped Jan 27 13:01:19.141376 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). Jan 27 13:01:19.142002 kernel: SELinux: policy capability network_peer_controls=1 Jan 27 13:01:19.142144 kernel: SELinux: policy capability open_perms=1 Jan 27 13:01:19.142189 kernel: SELinux: policy capability extended_socket_class=1 Jan 27 13:01:19.142234 kernel: SELinux: policy capability always_check_network=0 Jan 27 13:01:19.142334 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 27 13:01:19.142399 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 27 13:01:19.142442 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 27 13:01:19.142483 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 27 13:01:19.142566 kernel: SELinux: policy capability userspace_initial_context=0 Jan 27 13:01:19.142626 systemd[1]: Successfully loaded SELinux policy in 95.767ms. Jan 27 13:01:19.142837 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.711ms. 
Jan 27 13:01:19.142955 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 27 13:01:19.143023 systemd[1]: Detected virtualization kvm. Jan 27 13:01:19.143074 systemd[1]: Detected architecture x86-64. Jan 27 13:01:19.143130 systemd[1]: Detected first boot. Jan 27 13:01:19.143164 systemd[1]: Hostname set to . Jan 27 13:01:19.143227 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 27 13:01:19.143279 kernel: Guest personality initialized and is inactive Jan 27 13:01:19.143356 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 27 13:01:19.143407 kernel: Initialized host personality Jan 27 13:01:19.143435 zram_generator::config[1149]: No configuration found. Jan 27 13:01:19.143601 kernel: NET: Registered PF_VSOCK protocol family Jan 27 13:01:19.143653 systemd[1]: Populated /etc with preset unit settings. Jan 27 13:01:19.143721 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 27 13:01:19.143791 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 27 13:01:19.143838 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 27 13:01:19.143906 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 27 13:01:19.143957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 27 13:01:19.143989 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 27 13:01:19.144043 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 27 13:01:19.144097 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 27 13:01:19.144185 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 27 13:01:19.144229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 27 13:01:19.144260 systemd[1]: Created slice user.slice - User and Session Slice. Jan 27 13:01:19.144303 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 27 13:01:19.144327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 27 13:01:19.144358 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 27 13:01:19.144432 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 27 13:01:19.144488 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 27 13:01:19.144521 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 27 13:01:19.144585 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 27 13:01:19.144633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 27 13:01:19.144713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 27 13:01:19.144737 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 27 13:01:19.144781 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jan 27 13:01:19.144803 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 27 13:01:19.144856 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 27 13:01:19.144899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 27 13:01:19.144946 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 27 13:01:19.145000 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 27 13:01:19.145060 systemd[1]: Reached target slices.target - Slice Units. Jan 27 13:01:19.145120 systemd[1]: Reached target swap.target - Swaps. Jan 27 13:01:19.145146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 27 13:01:19.145196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 27 13:01:19.145220 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 27 13:01:19.145260 kernel: kauditd_printk_skb: 53 callbacks suppressed Jan 27 13:01:19.145302 kernel: audit: type=1335 audit(1769518878.805:101): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 27 13:01:19.145349 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 27 13:01:19.146755 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 27 13:01:19.146828 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 27 13:01:19.146882 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 27 13:01:19.146935 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 27 13:01:19.146985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 27 13:01:19.147009 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 27 13:01:19.147068 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 27 13:01:19.147118 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 27 13:01:19.147143 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 27 13:01:19.147164 systemd[1]: Mounting media.mount - External Media Directory... Jan 27 13:01:19.147208 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:19.147232 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 27 13:01:19.147276 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 27 13:01:19.147330 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 27 13:01:19.147354 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 27 13:01:19.147375 systemd[1]: Reached target machines.target - Containers. Jan 27 13:01:19.147395 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 27 13:01:19.147416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 13:01:19.147436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 27 13:01:19.147510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 27 13:01:19.147552 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 13:01:19.147576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 27 13:01:19.147606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 13:01:19.147628 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 27 13:01:19.147670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 13:01:19.147723 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 27 13:01:19.147779 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 27 13:01:19.147812 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 27 13:01:19.147862 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 27 13:01:19.147904 kernel: audit: type=1131 audit(1769518879.000:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.147945 systemd[1]: Stopped systemd-fsck-usr.service. Jan 27 13:01:19.147976 kernel: audit: type=1131 audit(1769518879.011:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.148019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 13:01:19.148042 kernel: audit: type=1334 audit(1769518879.025:104): prog-id=14 op=UNLOAD Jan 27 13:01:19.148100 kernel: audit: type=1334 audit(1769518879.025:105): prog-id=13 op=UNLOAD Jan 27 13:01:19.148171 kernel: audit: type=1334 audit(1769518879.027:106): prog-id=15 op=LOAD Jan 27 13:01:19.148228 kernel: audit: type=1334 audit(1769518879.029:107): prog-id=16 op=LOAD Jan 27 13:01:19.148250 kernel: audit: type=1334 audit(1769518879.029:108): prog-id=17 op=LOAD Jan 27 13:01:19.148287 kernel: fuse: init (API version 7.41) Jan 27 13:01:19.148335 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 27 13:01:19.148386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 27 13:01:19.148409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 27 13:01:19.148448 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 27 13:01:19.148487 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 27 13:01:19.148510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 27 13:01:19.148532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:19.148573 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 27 13:01:19.148596 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 27 13:01:19.148640 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 27 13:01:19.148681 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 27 13:01:19.149117 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 27 13:01:19.149170 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 27 13:01:19.149278 systemd-journald[1228]: Collecting audit messages is enabled. Jan 27 13:01:19.149445 kernel: audit: type=1305 audit(1769518879.127:109): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 27 13:01:19.149490 kernel: audit: type=1300 audit(1769518879.127:109): arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdea48f7f0 a2=4000 a3=0 items=0 ppid=1 pid=1228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:19.149514 systemd-journald[1228]: Journal started Jan 27 13:01:19.149563 systemd-journald[1228]: Runtime Journal (/run/log/journal/291f86d398f54868b3987dd0f033571b) is 4.7M, max 37.7M, 33M free. Jan 27 13:01:18.805000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 27 13:01:19.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.025000 audit: BPF prog-id=14 op=UNLOAD Jan 27 13:01:19.025000 audit: BPF prog-id=13 op=UNLOAD Jan 27 13:01:19.027000 audit: BPF prog-id=15 op=LOAD Jan 27 13:01:19.029000 audit: BPF prog-id=16 op=LOAD Jan 27 13:01:19.029000 audit: BPF prog-id=17 op=LOAD Jan 27 13:01:19.127000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 27 13:01:19.127000 audit[1228]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdea48f7f0 a2=4000 a3=0 items=0 ppid=1 pid=1228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:19.127000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 27 13:01:18.690066 systemd[1]: Queued start job for default target multi-user.target. Jan 27 13:01:18.701624 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 27 13:01:18.702608 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 27 13:01:19.159716 systemd[1]: Started systemd-journald.service - Journal Service. Jan 27 13:01:19.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.166342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 27 13:01:19.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.168054 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 27 13:01:19.168886 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 27 13:01:19.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.171290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 27 13:01:19.171570 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 13:01:19.172789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 13:01:19.173084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 13:01:19.175213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 27 13:01:19.175549 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 27 13:01:19.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.176720 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 13:01:19.177549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 27 13:01:19.179571 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 27 13:01:19.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.181811 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 27 13:01:19.210178 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 27 13:01:19.216854 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 27 13:01:19.222823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 27 13:01:19.223707 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 27 13:01:19.223761 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 27 13:01:19.228591 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 27 13:01:19.238477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 13:01:19.238765 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 13:01:19.246032 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 27 13:01:19.255895 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 27 13:01:19.258072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 13:01:19.263166 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 27 13:01:19.264864 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 13:01:19.277046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 27 13:01:19.284903 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 27 13:01:19.310064 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 27 13:01:19.314725 kernel: ACPI: bus type drm_connector registered Jan 27 13:01:19.316996 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 13:01:19.318796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 13:01:19.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.328126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 27 13:01:19.329015 systemd-journald[1228]: Time spent on flushing to /var/log/journal/291f86d398f54868b3987dd0f033571b is 155.112ms for 1298 entries. Jan 27 13:01:19.329015 systemd-journald[1228]: System Journal (/var/log/journal/291f86d398f54868b3987dd0f033571b) is 8M, max 588.1M, 580.1M free. Jan 27 13:01:19.495352 systemd-journald[1228]: Received client request to flush runtime journal. Jan 27 13:01:19.495414 kernel: loop1: detected capacity change from 0 to 111560 Jan 27 13:01:19.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.336040 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 27 13:01:19.340284 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 27 13:01:19.342368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 27 13:01:19.356977 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 27 13:01:19.374870 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 27 13:01:19.377066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 27 13:01:19.383450 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 27 13:01:19.444088 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 27 13:01:19.449571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 27 13:01:19.478547 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 27 13:01:19.491752 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Jan 27 13:01:19.491784 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Jan 27 13:01:19.498042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 27 13:01:19.522408 kernel: loop2: detected capacity change from 0 to 224512 Jan 27 13:01:19.523397 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 27 13:01:19.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.532358 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 27 13:01:19.571721 kernel: loop3: detected capacity change from 0 to 50784 Jan 27 13:01:19.615908 kernel: loop4: detected capacity change from 0 to 8 Jan 27 13:01:19.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.640000 audit: BPF prog-id=18 op=LOAD Jan 27 13:01:19.641000 audit: BPF prog-id=19 op=LOAD Jan 27 13:01:19.641000 audit: BPF prog-id=20 op=LOAD Jan 27 13:01:19.638168 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 27 13:01:19.645149 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 27 13:01:19.647000 audit: BPF prog-id=21 op=LOAD Jan 27 13:01:19.654749 kernel: loop5: detected capacity change from 0 to 111560 Jan 27 13:01:19.659397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 27 13:01:19.667180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 27 13:01:19.692731 kernel: loop6: detected capacity change from 0 to 224512 Jan 27 13:01:19.706249 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 27 13:01:19.778761 kernel: loop7: detected capacity change from 0 to 50784 Jan 27 13:01:19.759000 audit: BPF prog-id=22 op=LOAD Jan 27 13:01:19.759000 audit: BPF prog-id=23 op=LOAD Jan 27 13:01:19.759000 audit: BPF prog-id=24 op=LOAD Jan 27 13:01:19.764000 audit: BPF prog-id=25 op=LOAD Jan 27 13:01:19.764000 audit: BPF prog-id=26 op=LOAD Jan 27 13:01:19.764000 audit: BPF prog-id=27 op=LOAD Jan 27 13:01:19.762310 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 27 13:01:19.769441 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 27 13:01:19.795960 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 27 13:01:19.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.804773 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Jan 27 13:01:19.804808 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Jan 27 13:01:19.819913 kernel: loop1: detected capacity change from 0 to 8 Jan 27 13:01:19.819090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 27 13:01:19.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:19.824229 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. 
Jan 27 13:01:19.834286 (sd-merge)[1308]: Merged extensions into '/usr'. Jan 27 13:01:19.848183 systemd[1]: Reload requested from client PID 1273 ('systemd-sysext') (unit systemd-sysext.service)... Jan 27 13:01:19.848324 systemd[1]: Reloading... Jan 27 13:01:19.916179 systemd-nsresourced[1312]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 27 13:01:20.027738 zram_generator::config[1355]: No configuration found. Jan 27 13:01:20.187400 systemd-oomd[1309]: No swap; memory pressure usage will be degraded Jan 27 13:01:20.201910 systemd-resolved[1310]: Positive Trust Anchors: Jan 27 13:01:20.202706 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 27 13:01:20.202721 systemd-resolved[1310]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 27 13:01:20.202765 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 27 13:01:20.230304 systemd-resolved[1310]: Using system hostname 'srv-4nwk8.gb1.brightbox.com'. Jan 27 13:01:20.429202 systemd[1]: Reloading finished in 579 ms. Jan 27 13:01:20.446758 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 27 13:01:20.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:20.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:20.452586 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 27 13:01:20.454326 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 27 13:01:20.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:20.455420 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 27 13:01:20.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:20.457976 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 27 13:01:20.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:20.465184 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 27 13:01:20.476048 systemd[1]: Starting ensure-sysext.service... 
Jan 27 13:01:20.481980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 27 13:01:20.485000 audit: BPF prog-id=28 op=LOAD Jan 27 13:01:20.485000 audit: BPF prog-id=15 op=UNLOAD Jan 27 13:01:20.487000 audit: BPF prog-id=29 op=LOAD Jan 27 13:01:20.487000 audit: BPF prog-id=30 op=LOAD Jan 27 13:01:20.487000 audit: BPF prog-id=16 op=UNLOAD Jan 27 13:01:20.487000 audit: BPF prog-id=17 op=UNLOAD Jan 27 13:01:20.489000 audit: BPF prog-id=31 op=LOAD Jan 27 13:01:20.489000 audit: BPF prog-id=25 op=UNLOAD Jan 27 13:01:20.489000 audit: BPF prog-id=32 op=LOAD Jan 27 13:01:20.489000 audit: BPF prog-id=33 op=LOAD Jan 27 13:01:20.489000 audit: BPF prog-id=26 op=UNLOAD Jan 27 13:01:20.489000 audit: BPF prog-id=27 op=UNLOAD Jan 27 13:01:20.490000 audit: BPF prog-id=34 op=LOAD Jan 27 13:01:20.490000 audit: BPF prog-id=22 op=UNLOAD Jan 27 13:01:20.490000 audit: BPF prog-id=35 op=LOAD Jan 27 13:01:20.490000 audit: BPF prog-id=36 op=LOAD Jan 27 13:01:20.490000 audit: BPF prog-id=23 op=UNLOAD Jan 27 13:01:20.490000 audit: BPF prog-id=24 op=UNLOAD Jan 27 13:01:20.493000 audit: BPF prog-id=37 op=LOAD Jan 27 13:01:20.493000 audit: BPF prog-id=18 op=UNLOAD Jan 27 13:01:20.493000 audit: BPF prog-id=38 op=LOAD Jan 27 13:01:20.493000 audit: BPF prog-id=39 op=LOAD Jan 27 13:01:20.493000 audit: BPF prog-id=19 op=UNLOAD Jan 27 13:01:20.493000 audit: BPF prog-id=20 op=UNLOAD Jan 27 13:01:20.498000 audit: BPF prog-id=40 op=LOAD Jan 27 13:01:20.498000 audit: BPF prog-id=21 op=UNLOAD Jan 27 13:01:20.541054 systemd[1]: Reload requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)... Jan 27 13:01:20.541093 systemd[1]: Reloading... Jan 27 13:01:20.578790 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 27 13:01:20.581383 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 27 13:01:20.582051 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 27 13:01:20.584724 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 27 13:01:20.584926 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Jan 27 13:01:20.603440 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 13:01:20.603480 systemd-tmpfiles[1414]: Skipping /boot Jan 27 13:01:20.650018 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Jan 27 13:01:20.650064 systemd-tmpfiles[1414]: Skipping /boot Jan 27 13:01:20.698748 zram_generator::config[1449]: No configuration found. Jan 27 13:01:20.988242 systemd[1]: Reloading finished in 446 ms. Jan 27 13:01:21.013927 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 27 13:01:21.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:21.019000 audit: BPF prog-id=41 op=LOAD Jan 27 13:01:21.019000 audit: BPF prog-id=28 op=UNLOAD Jan 27 13:01:21.019000 audit: BPF prog-id=42 op=LOAD Jan 27 13:01:21.019000 audit: BPF prog-id=43 op=LOAD Jan 27 13:01:21.019000 audit: BPF prog-id=29 op=UNLOAD Jan 27 13:01:21.019000 audit: BPF prog-id=30 op=UNLOAD Jan 27 13:01:21.020000 audit: BPF prog-id=44 op=LOAD Jan 27 13:01:21.020000 audit: BPF prog-id=37 op=UNLOAD Jan 27 13:01:21.020000 audit: BPF prog-id=45 op=LOAD Jan 27 13:01:21.020000 audit: BPF prog-id=46 op=LOAD Jan 27 13:01:21.020000 audit: BPF prog-id=38 op=UNLOAD Jan 27 13:01:21.020000 audit: BPF prog-id=39 op=UNLOAD Jan 27 13:01:21.021000 audit: BPF prog-id=47 op=LOAD Jan 27 13:01:21.021000 audit: BPF prog-id=34 op=UNLOAD Jan 27 13:01:21.021000 audit: BPF prog-id=48 op=LOAD Jan 27 13:01:21.021000 audit: BPF prog-id=49 op=LOAD Jan 27 13:01:21.022000 audit: BPF prog-id=35 op=UNLOAD Jan 27 13:01:21.022000 audit: BPF prog-id=36 op=UNLOAD Jan 27 13:01:21.024000 audit: BPF prog-id=50 op=LOAD Jan 27 13:01:21.024000 audit: BPF prog-id=31 op=UNLOAD Jan 27 13:01:21.024000 audit: BPF prog-id=51 op=LOAD Jan 27 13:01:21.024000 audit: BPF prog-id=52 op=LOAD Jan 27 13:01:21.024000 audit: BPF prog-id=32 op=UNLOAD Jan 27 13:01:21.024000 audit: BPF prog-id=33 op=UNLOAD Jan 27 13:01:21.026000 audit: BPF prog-id=53 op=LOAD Jan 27 13:01:21.026000 audit: BPF prog-id=40 op=UNLOAD Jan 27 13:01:21.035933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 27 13:01:21.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.053256 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 13:01:21.058374 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 27 13:01:21.068844 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 27 13:01:21.074190 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 27 13:01:21.074000 audit: BPF prog-id=8 op=UNLOAD Jan 27 13:01:21.074000 audit: BPF prog-id=7 op=UNLOAD Jan 27 13:01:21.077000 audit: BPF prog-id=54 op=LOAD Jan 27 13:01:21.079000 audit: BPF prog-id=55 op=LOAD Jan 27 13:01:21.083310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 27 13:01:21.091066 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 27 13:01:21.099307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:21.099643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 13:01:21.105037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 27 13:01:21.116072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 27 13:01:21.120656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 27 13:01:21.122968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 13:01:21.123323 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 27 13:01:21.123520 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 13:01:21.128838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:21.135528 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:21.136511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 13:01:21.137861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 13:01:21.138124 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 13:01:21.138261 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 13:01:21.138396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:21.147613 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:21.147977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 27 13:01:21.161658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 27 13:01:21.163853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 27 13:01:21.164131 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 27 13:01:21.164278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 27 13:01:21.164467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 27 13:01:21.173579 systemd[1]: Finished ensure-sysext.service. Jan 27 13:01:21.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.176871 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 27 13:01:21.185000 audit: BPF prog-id=56 op=LOAD Jan 27 13:01:21.190797 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 27 13:01:21.218000 audit[1515]: SYSTEM_BOOT pid=1515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.224142 systemd-udevd[1512]: Using default interface naming scheme 'v257'. Jan 27 13:01:21.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.232283 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 27 13:01:21.271311 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 27 13:01:21.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.275000 audit: BPF prog-id=57 op=LOAD Jan 27 13:01:21.279842 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 27 13:01:21.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.364823 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 27 13:01:21.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.366128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 27 13:01:21.369298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 27 13:01:21.370835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 27 13:01:21.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:21.376686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 27 13:01:21.379481 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 27 13:01:21.384341 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 27 13:01:21.385809 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 27 13:01:21.398111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 27 13:01:21.398221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 27 13:01:21.412000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 27 13:01:21.412000 audit[1558]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4cf54b60 a2=420 a3=0 items=0 ppid=1504 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:21.412000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 13:01:21.414771 augenrules[1558]: No rules Jan 27 13:01:21.419580 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 13:01:21.421251 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 13:01:21.506510 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 27 13:01:21.509425 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 27 13:01:21.610887 systemd-networkd[1541]: lo: Link UP Jan 27 13:01:21.613363 systemd-networkd[1541]: lo: Gained carrier Jan 27 13:01:21.616062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 27 13:01:21.617741 systemd[1]: Reached target time-set.target - System Time Set. Jan 27 13:01:21.632361 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 27 13:01:21.633499 systemd[1]: Reached target network.target - Network. Jan 27 13:01:21.641750 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 27 13:01:21.647145 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 27 13:01:21.739423 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 27 13:01:21.750313 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 27 13:01:21.906766 kernel: mousedev: PS/2 mouse device common for all mice Jan 27 13:01:21.927747 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 27 13:01:21.965252 systemd-networkd[1541]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 13:01:21.965269 systemd-networkd[1541]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 27 13:01:21.968764 kernel: ACPI: button: Power Button [PWRF] Jan 27 13:01:21.970418 systemd-networkd[1541]: eth0: Link UP Jan 27 13:01:21.970729 systemd-networkd[1541]: eth0: Gained carrier Jan 27 13:01:21.970753 systemd-networkd[1541]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 27 13:01:21.996232 systemd-networkd[1541]: eth0: DHCPv4 address 10.230.66.190/30, gateway 10.230.66.189 acquired from 10.230.66.189 Jan 27 13:01:22.003954 systemd-timesyncd[1529]: Network configuration changed, trying to establish connection. Jan 27 13:01:22.022225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 27 13:01:22.033404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 27 13:01:22.105564 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 27 13:01:22.123214 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 27 13:01:22.124452 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 27 13:01:22.187819 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 27 13:01:22.198389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 27 13:01:22.207003 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 27 13:01:22.251376 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 27 13:01:22.252629 systemd[1]: Reached target sysinit.target - System Initialization. Jan 27 13:01:22.253525 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 27 13:01:22.254354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 27 13:01:22.255141 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 27 13:01:22.256333 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 27 13:01:22.257201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 27 13:01:22.257967 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 27 13:01:22.258846 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 27 13:01:22.259805 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 27 13:01:22.260562 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 27 13:01:22.260624 systemd[1]: Reached target paths.target - Path Units. Jan 27 13:01:22.261296 systemd[1]: Reached target timers.target - Timer Units. Jan 27 13:01:22.265406 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 27 13:01:22.268279 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 27 13:01:22.273684 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 27 13:01:22.276645 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 27 13:01:22.277441 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 27 13:01:22.287880 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 27 13:01:22.289096 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 27 13:01:22.291636 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 27 13:01:22.306534 systemd[1]: Reached target sockets.target - Socket Units. Jan 27 13:01:22.308804 systemd[1]: Reached target basic.target - Basic System. Jan 27 13:01:22.309731 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 27 13:01:22.309805 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 27 13:01:22.315227 systemd[1]: Starting containerd.service - containerd container runtime... Jan 27 13:01:22.323266 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 27 13:01:22.328327 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 27 13:01:22.332430 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 27 13:01:22.346037 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 27 13:01:22.351109 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 27 13:01:22.351874 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 27 13:01:22.357330 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 27 13:01:22.365728 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 27 13:01:22.373222 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 27 13:01:22.382751 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:22.385072 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 27 13:01:22.395216 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 27 13:01:22.411375 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 27 13:01:22.412163 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 27 13:01:22.413240 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 27 13:01:22.417668 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Refreshing passwd entry cache Jan 27 13:01:22.418101 systemd[1]: Starting update-engine.service - Update Engine... Jan 27 13:01:22.419547 oslogin_cache_refresh[1608]: Refreshing passwd entry cache Jan 27 13:01:23.190858 systemd-resolved[1310]: Clock change detected. Flushing caches. Jan 27 13:01:23.191742 systemd-timesyncd[1529]: Contacted time server 109.74.192.36:123 (3.flatcar.pool.ntp.org). Jan 27 13:01:23.193293 systemd-timesyncd[1529]: Initial clock synchronization to Tue 2026-01-27 13:01:23.190729 UTC. Jan 27 13:01:23.202576 oslogin_cache_refresh[1608]: Failure getting users, quitting Jan 27 13:01:23.204856 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Failure getting users, quitting Jan 27 13:01:23.204856 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 27 13:01:23.204856 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Refreshing group entry cache Jan 27 13:01:23.199942 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 27 13:01:23.202627 oslogin_cache_refresh[1608]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 27 13:01:23.202797 oslogin_cache_refresh[1608]: Refreshing group entry cache Jan 27 13:01:23.208617 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Failure getting groups, quitting Jan 27 13:01:23.208617 google_oslogin_nss_cache[1608]: oslogin_cache_refresh[1608]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 13:01:23.206665 oslogin_cache_refresh[1608]: Failure getting groups, quitting Jan 27 13:01:23.206683 oslogin_cache_refresh[1608]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 27 13:01:23.219779 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 27 13:01:23.221373 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 27 13:01:23.223842 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 27 13:01:23.226225 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 27 13:01:23.228595 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 27 13:01:23.255564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 27 13:01:23.285214 extend-filesystems[1607]: Found /dev/vda6 Jan 27 13:01:23.286932 jq[1606]: false Jan 27 13:01:23.287657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 27 13:01:23.292288 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 27 13:01:23.310123 extend-filesystems[1607]: Found /dev/vda9 Jan 27 13:01:23.327609 jq[1618]: true Jan 27 13:01:23.358343 extend-filesystems[1607]: Checking size of /dev/vda9 Jan 27 13:01:23.379074 update_engine[1617]: I20260127 13:01:23.376677 1617 main.cc:92] Flatcar Update Engine starting Jan 27 13:01:23.391228 jq[1640]: true Jan 27 13:01:23.469014 tar[1629]: linux-amd64/LICENSE Jan 27 13:01:23.479222 tar[1629]: linux-amd64/helm Jan 27 13:01:23.506022 extend-filesystems[1607]: Resized partition /dev/vda9 Jan 27 13:01:23.512008 dbus-daemon[1604]: [system] SELinux support is enabled Jan 27 13:01:23.512798 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 27 13:01:23.516835 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 27 13:01:23.523919 extend-filesystems[1655]: resize2fs 1.47.3 (8-Jul-2025) Jan 27 13:01:23.516908 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 27 13:01:23.517751 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 27 13:01:23.517804 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 27 13:01:23.543457 dbus-daemon[1604]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1541 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 27 13:01:23.543694 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 27 13:01:23.557257 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 27 13:01:23.558548 systemd[1]: motdgen.service: Deactivated successfully. Jan 27 13:01:23.562544 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 27 13:01:23.559635 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 27 13:01:23.570724 systemd[1]: Started update-engine.service - Update Engine. Jan 27 13:01:23.574418 update_engine[1617]: I20260127 13:01:23.574155 1617 update_check_scheduler.cc:74] Next update check in 8m30s Jan 27 13:01:23.574982 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 27 13:01:23.806351 bash[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 27 13:01:23.807350 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 27 13:01:23.871636 systemd-logind[1615]: Watching system buttons on /dev/input/event3 (Power Button) Jan 27 13:01:23.905111 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 27 13:01:23.871849 systemd-logind[1615]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 27 13:01:23.906186 extend-filesystems[1655]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 27 13:01:23.906186 extend-filesystems[1655]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 27 13:01:23.906186 extend-filesystems[1655]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 27 13:01:24.087814 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 27 13:01:24.038277 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 27 13:01:24.247418 containerd[1636]: time="2026-01-27T13:01:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 27 13:01:24.247418 containerd[1636]: time="2026-01-27T13:01:24.206123427Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 27 13:01:24.247964 extend-filesystems[1607]: Resized filesystem in /dev/vda9 Jan 27 13:01:24.089223 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 27 13:01:24.042957 dbus-daemon[1604]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1658 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 27 13:01:24.278028 containerd[1636]: time="2026-01-27T13:01:24.272442790Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="50.541µs" Jan 27 13:01:24.090714 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 27 13:01:24.101378 systemd-logind[1615]: New seat seat0. Jan 27 13:01:24.194933 locksmithd[1661]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 27 13:01:24.242957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
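The extend-filesystems output above records an on-line grow of the root filesystem from 1617920 to 14138363 4 KiB blocks. Roughly the same operation done by hand, assuming /dev/vda9 is the mounted root partition as in this log, would be:

  lsblk /dev/vda          # confirm vda9 now spans the enlarged disk
  resize2fs /dev/vda9     # ext4 grows while mounted, hence "on-line resizing required"
  df -h /                 # should now report the larger size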
Jan 27 13:01:24.251672 systemd[1]: Started systemd-logind.service - User Login Management. Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.272507936Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.280464537Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.280498061Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.280991298Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.281023567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.281167036Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.281191737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.281494354Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.281520831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.289726189Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.289750378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291227 containerd[1636]: time="2026-01-27T13:01:24.290310948Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291849 containerd[1636]: time="2026-01-27T13:01:24.290347022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291849 containerd[1636]: time="2026-01-27T13:01:24.290579808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291849 containerd[1636]: time="2026-01-27T13:01:24.291045810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291849 containerd[1636]: time="2026-01-27T13:01:24.291097634Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 27 13:01:24.291849 containerd[1636]: time="2026-01-27T13:01:24.291116267Z" 
level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 27 13:01:24.297197 containerd[1636]: time="2026-01-27T13:01:24.294286346Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 27 13:01:24.297197 containerd[1636]: time="2026-01-27T13:01:24.296892486Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 27 13:01:24.297197 containerd[1636]: time="2026-01-27T13:01:24.297131167Z" level=info msg="metadata content store policy set" policy=shared Jan 27 13:01:24.303865 containerd[1636]: time="2026-01-27T13:01:24.303768820Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 27 13:01:24.305961 containerd[1636]: time="2026-01-27T13:01:24.305889795Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 13:01:24.306582 containerd[1636]: time="2026-01-27T13:01:24.306509583Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 27 13:01:24.306948 containerd[1636]: time="2026-01-27T13:01:24.306918698Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 27 13:01:24.307077 containerd[1636]: time="2026-01-27T13:01:24.307051384Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 27 13:01:24.307204 containerd[1636]: time="2026-01-27T13:01:24.307178218Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 27 13:01:24.307428 containerd[1636]: time="2026-01-27T13:01:24.307400264Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 27 13:01:24.307556 containerd[1636]: time="2026-01-27T13:01:24.307529884Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 27 13:01:24.307901 containerd[1636]: time="2026-01-27T13:01:24.307852083Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 27 13:01:24.308008 containerd[1636]: time="2026-01-27T13:01:24.307983862Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 27 13:01:24.308265 containerd[1636]: time="2026-01-27T13:01:24.308239084Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 27 13:01:24.308372 containerd[1636]: time="2026-01-27T13:01:24.308349381Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309475874Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309516505Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309815109Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309883187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers 
type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309913816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309948046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.309978926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310000075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310023499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310041770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310061317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310079132Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310096985Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310136159Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 27 13:01:24.310439 containerd[1636]: time="2026-01-27T13:01:24.310215263Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 27 13:01:24.311103 containerd[1636]: time="2026-01-27T13:01:24.310239917Z" level=info msg="Start snapshots syncer" Jan 27 13:01:24.311529 containerd[1636]: time="2026-01-27T13:01:24.311454169Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 27 13:01:24.312382 containerd[1636]: time="2026-01-27T13:01:24.312308934Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 27 13:01:24.314151 containerd[1636]: time="2026-01-27T13:01:24.313757892Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 27 13:01:24.314282 containerd[1636]: time="2026-01-27T13:01:24.314254854Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.314835784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.314901686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.314924967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.314943919Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.314965086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.314984898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.315002462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 27 13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.315023527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 27 
13:01:24.316536 containerd[1636]: time="2026-01-27T13:01:24.315044085Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317157189Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317328143Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317354818Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317384290Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317398654Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317425255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317445214Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317502210Z" level=info msg="runtime interface created" Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317532817Z" level=info msg="created NRI interface" Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317559987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317590774Z" level=info msg="Connect containerd service" Jan 27 13:01:24.317702 containerd[1636]: time="2026-01-27T13:01:24.317634602Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 27 13:01:24.323819 containerd[1636]: time="2026-01-27T13:01:24.321368427Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 27 13:01:24.344176 systemd[1]: Starting polkit.service - Authorization Manager... Jan 27 13:01:24.349935 systemd[1]: Starting sshkeys.service... Jan 27 13:01:24.422727 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 27 13:01:24.431907 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 27 13:01:24.482181 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:24.709490 systemd-networkd[1541]: eth0: Gained IPv6LL Jan 27 13:01:24.717784 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 27 13:01:24.720508 systemd[1]: Reached target network-online.target - Network is Online. Jan 27 13:01:24.735893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
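The "failed to load cni during init" error above is expected on a fresh node: per the CRI config dumped earlier, containerd looks for network configs in /etc/cni/net.d (with binaries in /opt/cni/bin), and nothing has installed one yet; a pod network add-on normally drops one in after the node joins a cluster. Purely to illustrate the kind of file containerd is looking for (the file name, network name and subnet below are made up, not taken from this system):

  cat <<'EOF' >/etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF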
Jan 27 13:01:24.731828 polkitd[1696]: Started polkitd version 126 Jan 27 13:01:24.760435 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 27 13:01:24.789185 polkitd[1696]: Loading rules from directory /etc/polkit-1/rules.d Jan 27 13:01:24.789945 polkitd[1696]: Loading rules from directory /run/polkit-1/rules.d Jan 27 13:01:24.790055 polkitd[1696]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 27 13:01:24.791273 polkitd[1696]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 27 13:01:24.791381 polkitd[1696]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 27 13:01:24.793630 polkitd[1696]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 27 13:01:24.796750 polkitd[1696]: Finished loading, compiling and executing 2 rules Jan 27 13:01:24.802416 systemd[1]: Started polkit.service - Authorization Manager. Jan 27 13:01:24.808313 dbus-daemon[1604]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 27 13:01:24.809241 polkitd[1696]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 27 13:01:24.813575 containerd[1636]: time="2026-01-27T13:01:24.810401000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.813272196Z" level=info msg="Start subscribing containerd event" Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.815805487Z" level=info msg="Start recovering state" Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.816160364Z" level=info msg="Start event monitor" Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.816234095Z" level=info msg="Start cni network conf syncer for default" Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.816252280Z" level=info msg="Start streaming server" Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.816272408Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.816286076Z" level=info msg="runtime interface starting up..." Jan 27 13:01:24.817496 containerd[1636]: time="2026-01-27T13:01:24.816296375Z" level=info msg="starting plugins..." Jan 27 13:01:24.835437 containerd[1636]: time="2026-01-27T13:01:24.816317777Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 27 13:01:24.835437 containerd[1636]: time="2026-01-27T13:01:24.833704681Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 27 13:01:24.835437 containerd[1636]: time="2026-01-27T13:01:24.833863844Z" level=info msg="containerd successfully booted in 0.638740s" Jan 27 13:01:24.834060 systemd[1]: Started containerd.service - containerd container runtime. Jan 27 13:01:24.861043 systemd-hostnamed[1658]: Hostname set to (static) Jan 27 13:01:24.906937 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 27 13:01:25.248363 sshd_keygen[1648]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 27 13:01:25.260554 tar[1629]: linux-amd64/README.md Jan 27 13:01:25.295296 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 27 13:01:25.306030 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 27 13:01:25.307642 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 27 13:01:25.345554 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:25.349411 systemd[1]: issuegen.service: Deactivated successfully. Jan 27 13:01:25.350066 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 27 13:01:25.357975 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 27 13:01:25.385476 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 27 13:01:25.391082 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 27 13:01:25.396001 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 27 13:01:25.398146 systemd[1]: Reached target getty.target - Login Prompts. Jan 27 13:01:25.578568 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:26.220705 systemd-networkd[1541]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90af:24:19ff:fee6:42be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90af:24:19ff:fee6:42be/64 assigned by NDisc. Jan 27 13:01:26.220718 systemd-networkd[1541]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 27 13:01:26.560027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:01:26.578084 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 13:01:26.805353 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 27 13:01:26.810999 systemd[1]: Started sshd@0-10.230.66.190:22-68.220.241.50:54942.service - OpenSSH per-connection server daemon (68.220.241.50:54942). Jan 27 13:01:27.344231 kubelet[1758]: E0127 13:01:27.344147 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 13:01:27.347518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 13:01:27.347861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 13:01:27.349092 systemd[1]: kubelet.service: Consumed 1.629s CPU time, 263.5M memory peak. Jan 27 13:01:27.373695 sshd[1764]: Accepted publickey for core from 68.220.241.50 port 54942 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:27.376691 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:27.387375 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:27.410273 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 27 13:01:27.413797 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 27 13:01:27.421776 systemd-logind[1615]: New session 1 of user core. Jan 27 13:01:27.472480 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 27 13:01:27.477228 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 27 13:01:27.504101 (systemd)[1773]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:27.509569 systemd-logind[1615]: New session 2 of user core. 
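The kubelet failure above is the usual pre-join state: the service exits because /var/lib/kubelet/config.yaml does not exist, and on a kubeadm-managed node that file is only written by kubeadm init or kubeadm join, so the unit keeps restarting and failing (as it does again further down) until then. Assuming the standard kubeadm drop-in layout, this can be confirmed with:

  systemctl cat kubelet.service      # the kubeadm drop-in normally passes --config=/var/lib/kubelet/config.yaml
  ls /var/lib/kubelet/config.yaml /etc/kubernetes/kubelet.conf 2>/dev/null \
    || echo "node not joined yet"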
Jan 27 13:01:27.596551 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:27.724078 systemd[1773]: Queued start job for default target default.target. Jan 27 13:01:27.733054 systemd[1773]: Created slice app.slice - User Application Slice. Jan 27 13:01:27.733133 systemd[1773]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 27 13:01:27.733159 systemd[1773]: Reached target paths.target - Paths. Jan 27 13:01:27.733258 systemd[1773]: Reached target timers.target - Timers. Jan 27 13:01:27.736004 systemd[1773]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 27 13:01:27.739778 systemd[1773]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 27 13:01:27.758913 systemd[1773]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 27 13:01:27.759003 systemd[1773]: Reached target sockets.target - Sockets. Jan 27 13:01:27.765029 systemd[1773]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 27 13:01:27.765252 systemd[1773]: Reached target basic.target - Basic System. Jan 27 13:01:27.765372 systemd[1773]: Reached target default.target - Main User Target. Jan 27 13:01:27.765467 systemd[1773]: Startup finished in 243ms. Jan 27 13:01:27.765839 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 27 13:01:27.777078 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 27 13:01:28.077841 systemd[1]: Started sshd@1-10.230.66.190:22-68.220.241.50:54944.service - OpenSSH per-connection server daemon (68.220.241.50:54944). Jan 27 13:01:28.609961 sshd[1788]: Accepted publickey for core from 68.220.241.50 port 54944 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:28.612362 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:28.627503 systemd-logind[1615]: New session 3 of user core. Jan 27 13:01:28.640390 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 27 13:01:28.901645 sshd[1792]: Connection closed by 68.220.241.50 port 54944 Jan 27 13:01:28.902709 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:28.909685 systemd[1]: sshd@1-10.230.66.190:22-68.220.241.50:54944.service: Deactivated successfully. Jan 27 13:01:28.912422 systemd[1]: session-3.scope: Deactivated successfully. Jan 27 13:01:28.914221 systemd-logind[1615]: Session 3 logged out. Waiting for processes to exit. Jan 27 13:01:28.916306 systemd-logind[1615]: Removed session 3. Jan 27 13:01:29.012553 systemd[1]: Started sshd@2-10.230.66.190:22-68.220.241.50:54954.service - OpenSSH per-connection server daemon (68.220.241.50:54954). Jan 27 13:01:29.566033 sshd[1798]: Accepted publickey for core from 68.220.241.50 port 54954 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:29.567845 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:29.575719 systemd-logind[1615]: New session 4 of user core. Jan 27 13:01:29.581847 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 27 13:01:29.857913 sshd[1802]: Connection closed by 68.220.241.50 port 54954 Jan 27 13:01:29.857657 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:29.865199 systemd[1]: sshd@2-10.230.66.190:22-68.220.241.50:54954.service: Deactivated successfully. Jan 27 13:01:29.867906 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 27 13:01:29.870962 systemd-logind[1615]: Session 4 logged out. Waiting for processes to exit. Jan 27 13:01:29.872983 systemd-logind[1615]: Removed session 4. Jan 27 13:01:30.792120 login[1750]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:30.808640 systemd-logind[1615]: New session 5 of user core. Jan 27 13:01:30.814110 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 27 13:01:30.828165 login[1749]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:30.840890 systemd-logind[1615]: New session 6 of user core. Jan 27 13:01:30.850231 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 27 13:01:31.402556 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:31.424855 coreos-metadata[1603]: Jan 27 13:01:31.424 WARN failed to locate config-drive, using the metadata service API instead Jan 27 13:01:31.451828 coreos-metadata[1603]: Jan 27 13:01:31.451 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 27 13:01:31.460056 coreos-metadata[1603]: Jan 27 13:01:31.459 INFO Fetch failed with 404: resource not found Jan 27 13:01:31.460056 coreos-metadata[1603]: Jan 27 13:01:31.460 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 27 13:01:31.461012 coreos-metadata[1603]: Jan 27 13:01:31.460 INFO Fetch successful Jan 27 13:01:31.461202 coreos-metadata[1603]: Jan 27 13:01:31.461 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 27 13:01:31.480955 coreos-metadata[1603]: Jan 27 13:01:31.480 INFO Fetch successful Jan 27 13:01:31.481213 coreos-metadata[1603]: Jan 27 13:01:31.481 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 27 13:01:31.497604 coreos-metadata[1603]: Jan 27 13:01:31.497 INFO Fetch successful Jan 27 13:01:31.497847 coreos-metadata[1603]: Jan 27 13:01:31.497 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 27 13:01:31.514574 coreos-metadata[1603]: Jan 27 13:01:31.514 INFO Fetch successful Jan 27 13:01:31.514760 coreos-metadata[1603]: Jan 27 13:01:31.514 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 27 13:01:31.533088 coreos-metadata[1603]: Jan 27 13:01:31.533 INFO Fetch successful Jan 27 13:01:31.578630 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 27 13:01:31.579609 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
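With no config-drive present (the repeated "/dev/disk/by-label/config-2: Can't lookup blockdev" messages), coreos-metadata falls back to the OpenStack/EC2-compatible metadata service, which is why the per-key fetches above go to 169.254.169.254. The same endpoints can be queried by hand from the instance, e.g.:

  curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json   # 404 here, hence the per-key fallback
  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/instance-id
  curl -s http://169.254.169.254/latest/meta-data/public-ipv4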
Jan 27 13:01:31.611647 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 27 13:01:31.627029 coreos-metadata[1702]: Jan 27 13:01:31.626 WARN failed to locate config-drive, using the metadata service API instead Jan 27 13:01:31.654889 coreos-metadata[1702]: Jan 27 13:01:31.654 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 27 13:01:31.696047 coreos-metadata[1702]: Jan 27 13:01:31.695 INFO Fetch successful Jan 27 13:01:31.696262 coreos-metadata[1702]: Jan 27 13:01:31.696 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 27 13:01:31.726571 coreos-metadata[1702]: Jan 27 13:01:31.726 INFO Fetch successful Jan 27 13:01:31.729767 unknown[1702]: wrote ssh authorized keys file for user: core Jan 27 13:01:31.768458 update-ssh-keys[1843]: Updated "/home/core/.ssh/authorized_keys" Jan 27 13:01:31.770838 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 27 13:01:31.775849 systemd[1]: Finished sshkeys.service. Jan 27 13:01:31.778423 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 27 13:01:31.780986 systemd[1]: Startup finished in 3.776s (kernel) + 14.841s (initrd) + 13.311s (userspace) = 31.929s. Jan 27 13:01:37.598766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 27 13:01:37.602757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:01:37.907812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:01:37.920240 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 13:01:37.988663 kubelet[1855]: E0127 13:01:37.988587 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 13:01:37.992997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 13:01:37.993255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 13:01:37.994342 systemd[1]: kubelet.service: Consumed 343ms CPU time, 108.8M memory peak. Jan 27 13:01:39.974394 systemd[1]: Started sshd@3-10.230.66.190:22-68.220.241.50:47634.service - OpenSSH per-connection server daemon (68.220.241.50:47634). Jan 27 13:01:40.491066 sshd[1862]: Accepted publickey for core from 68.220.241.50 port 47634 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:40.492747 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:40.503620 systemd-logind[1615]: New session 7 of user core. Jan 27 13:01:40.522971 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 27 13:01:40.772975 sshd[1866]: Connection closed by 68.220.241.50 port 47634 Jan 27 13:01:40.774319 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:40.779555 systemd[1]: sshd@3-10.230.66.190:22-68.220.241.50:47634.service: Deactivated successfully. Jan 27 13:01:40.782935 systemd[1]: session-7.scope: Deactivated successfully. Jan 27 13:01:40.785811 systemd-logind[1615]: Session 7 logged out. Waiting for processes to exit. Jan 27 13:01:40.788824 systemd-logind[1615]: Removed session 7. 
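The "Startup finished in 3.776s (kernel) + 14.841s (initrd) + 13.311s (userspace)" summary above is the same accounting systemd-analyze reports, which is the natural starting point if the userspace portion ever needs to be broken down further (assuming systemd-analyze is available in the image):

  systemd-analyze                                   # kernel/initrd/userspace totals, as in the log line
  systemd-analyze blame                             # per-unit startup time, sorted
  systemd-analyze critical-chain multi-user.target  # chain of units gating the target reached above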
Jan 27 13:01:40.883912 systemd[1]: Started sshd@4-10.230.66.190:22-68.220.241.50:47648.service - OpenSSH per-connection server daemon (68.220.241.50:47648). Jan 27 13:01:41.399287 sshd[1872]: Accepted publickey for core from 68.220.241.50 port 47648 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:41.401211 sshd-session[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:41.408763 systemd-logind[1615]: New session 8 of user core. Jan 27 13:01:41.417934 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 27 13:01:41.668388 sshd[1876]: Connection closed by 68.220.241.50 port 47648 Jan 27 13:01:41.669540 sshd-session[1872]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:41.674427 systemd[1]: sshd@4-10.230.66.190:22-68.220.241.50:47648.service: Deactivated successfully. Jan 27 13:01:41.676942 systemd[1]: session-8.scope: Deactivated successfully. Jan 27 13:01:41.679916 systemd-logind[1615]: Session 8 logged out. Waiting for processes to exit. Jan 27 13:01:41.681185 systemd-logind[1615]: Removed session 8. Jan 27 13:01:41.780152 systemd[1]: Started sshd@5-10.230.66.190:22-68.220.241.50:47662.service - OpenSSH per-connection server daemon (68.220.241.50:47662). Jan 27 13:01:42.310052 sshd[1882]: Accepted publickey for core from 68.220.241.50 port 47662 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:42.312066 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:42.319593 systemd-logind[1615]: New session 9 of user core. Jan 27 13:01:42.326929 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 27 13:01:42.597369 sshd[1886]: Connection closed by 68.220.241.50 port 47662 Jan 27 13:01:42.598489 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:42.604172 systemd[1]: sshd@5-10.230.66.190:22-68.220.241.50:47662.service: Deactivated successfully. Jan 27 13:01:42.607719 systemd[1]: session-9.scope: Deactivated successfully. Jan 27 13:01:42.610078 systemd-logind[1615]: Session 9 logged out. Waiting for processes to exit. Jan 27 13:01:42.611639 systemd-logind[1615]: Removed session 9. Jan 27 13:01:42.700827 systemd[1]: Started sshd@6-10.230.66.190:22-68.220.241.50:40446.service - OpenSSH per-connection server daemon (68.220.241.50:40446). Jan 27 13:01:43.221500 sshd[1892]: Accepted publickey for core from 68.220.241.50 port 40446 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:43.222838 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:43.237125 systemd-logind[1615]: New session 10 of user core. Jan 27 13:01:43.245827 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 27 13:01:43.428681 sudo[1897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 27 13:01:43.429185 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 13:01:43.445496 sudo[1897]: pam_unix(sudo:session): session closed for user root Jan 27 13:01:43.537872 sshd[1896]: Connection closed by 68.220.241.50 port 40446 Jan 27 13:01:43.537665 sshd-session[1892]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:43.545849 systemd[1]: sshd@6-10.230.66.190:22-68.220.241.50:40446.service: Deactivated successfully. Jan 27 13:01:43.548688 systemd[1]: session-10.scope: Deactivated successfully. 
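The sudo invocation above (setenforce 1) flips SELinux to enforcing only for the running system; it does not survive a reboot unless the persistent configuration is also changed. A quick check, with the config path assumed to be the conventional one:

  getenforce                            # Enforcing or Permissive
  sudo setenforce 1                     # runtime switch, as logged above
  grep '^SELINUX=' /etc/selinux/config  # persistent mode (path assumed)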
Jan 27 13:01:43.551785 systemd-logind[1615]: Session 10 logged out. Waiting for processes to exit. Jan 27 13:01:43.553406 systemd-logind[1615]: Removed session 10. Jan 27 13:01:43.637222 systemd[1]: Started sshd@7-10.230.66.190:22-68.220.241.50:40456.service - OpenSSH per-connection server daemon (68.220.241.50:40456). Jan 27 13:01:44.140935 sshd[1904]: Accepted publickey for core from 68.220.241.50 port 40456 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:44.142771 sshd-session[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:44.149921 systemd-logind[1615]: New session 11 of user core. Jan 27 13:01:44.158788 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 27 13:01:44.329737 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 27 13:01:44.330219 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 13:01:44.334334 sudo[1910]: pam_unix(sudo:session): session closed for user root Jan 27 13:01:44.344891 sudo[1909]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 27 13:01:44.345356 sudo[1909]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 13:01:44.356755 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 27 13:01:44.428000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 13:01:44.431160 kernel: kauditd_printk_skb: 119 callbacks suppressed Jan 27 13:01:44.431441 kernel: audit: type=1305 audit(1769518904.428:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 27 13:01:44.428000 audit[1934]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc014b9db0 a2=420 a3=0 items=0 ppid=1915 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:44.434110 augenrules[1934]: No rules Jan 27 13:01:44.435644 kernel: audit: type=1300 audit(1769518904.428:226): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc014b9db0 a2=420 a3=0 items=0 ppid=1915 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:44.439623 systemd[1]: audit-rules.service: Deactivated successfully. Jan 27 13:01:44.439989 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 27 13:01:44.442583 sudo[1909]: pam_unix(sudo:session): session closed for user root Jan 27 13:01:44.428000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 13:01:44.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:44.453204 kernel: audit: type=1327 audit(1769518904.428:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 27 13:01:44.453306 kernel: audit: type=1130 audit(1769518904.438:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.455562 kernel: audit: type=1131 audit(1769518904.438:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.441000 audit[1909]: USER_END pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.460709 kernel: audit: type=1106 audit(1769518904.441:229): pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.441000 audit[1909]: CRED_DISP pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.464692 kernel: audit: type=1104 audit(1769518904.441:230): pid=1909 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.537575 sshd[1908]: Connection closed by 68.220.241.50 port 40456 Jan 27 13:01:44.537797 sshd-session[1904]: pam_unix(sshd:session): session closed for user core Jan 27 13:01:44.538000 audit[1904]: USER_END pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:44.547633 kernel: audit: type=1106 audit(1769518904.538:231): pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:44.550053 systemd-logind[1615]: Session 11 logged out. Waiting for processes to exit. Jan 27 13:01:44.539000 audit[1904]: CRED_DISP pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:44.550868 systemd[1]: sshd@7-10.230.66.190:22-68.220.241.50:40456.service: Deactivated successfully. Jan 27 13:01:44.554260 systemd[1]: session-11.scope: Deactivated successfully. 
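The audit-rules run above ends with augenrules reporting "No rules": the earlier sudo removed the files under /etc/audit/rules.d, so the regenerated /etc/audit/audit.rules that auditctl loads (its command line is visible hex-encoded in the PROCTITLE record) is effectively empty. As a sketch with the stock audit tooling, the state can be inspected or rebuilt with:

  auditctl -l         # list currently loaded rules ("No rules" after this run)
  augenrules --check  # compare /etc/audit/audit.rules with the rules.d fragments
  augenrules --load   # regenerate and load once rule files are restored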
Jan 27 13:01:44.555557 kernel: audit: type=1104 audit(1769518904.539:232): pid=1904 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:44.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.66.190:22-68.220.241.50:40456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.560558 kernel: audit: type=1131 audit(1769518904.549:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.66.190:22-68.220.241.50:40456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:44.560919 systemd-logind[1615]: Removed session 11. Jan 27 13:01:44.651011 systemd[1]: Started sshd@8-10.230.66.190:22-68.220.241.50:40460.service - OpenSSH per-connection server daemon (68.220.241.50:40460). Jan 27 13:01:44.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.66.190:22-68.220.241.50:40460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:45.170000 audit[1943]: USER_ACCT pid=1943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:45.172308 sshd[1943]: Accepted publickey for core from 68.220.241.50 port 40460 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:01:45.172000 audit[1943]: CRED_ACQ pid=1943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:45.172000 audit[1943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd75675000 a2=3 a3=0 items=0 ppid=1 pid=1943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:45.172000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:01:45.174930 sshd-session[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:01:45.183010 systemd-logind[1615]: New session 12 of user core. Jan 27 13:01:45.187788 systemd[1]: Started session-12.scope - Session 12 of User core. 
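The PROCTITLE fields in these audit records are hex-encoded, NUL-separated command lines; the one above decodes to the privileged sshd session's process title. They can be decoded manually, or rendered by ausearch if the audit userspace tools happen to be installed (both commands below are illustrative, not part of this boot):

  echo 737368642D73657373696F6E3A20636F7265205B707269765D | xxd -r -p; echo
  # prints: sshd-session: core [priv]
  ausearch -i --start recent | tail    # -i renders hex fields in human-readable form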
Jan 27 13:01:45.191000 audit[1943]: USER_START pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:45.194000 audit[1947]: CRED_ACQ pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:01:45.364000 audit[1948]: USER_ACCT pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:45.366091 sudo[1948]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 27 13:01:45.364000 audit[1948]: CRED_REFR pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:45.365000 audit[1948]: USER_START pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:01:45.366692 sudo[1948]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 27 13:01:46.154546 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 27 13:01:46.170273 (dockerd)[1967]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 27 13:01:46.698387 dockerd[1967]: time="2026-01-27T13:01:46.698294245Z" level=info msg="Starting up" Jan 27 13:01:46.701077 dockerd[1967]: time="2026-01-27T13:01:46.700758697Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 27 13:01:46.728332 dockerd[1967]: time="2026-01-27T13:01:46.728261892Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 27 13:01:46.755569 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3809054169-merged.mount: Deactivated successfully. Jan 27 13:01:46.796355 dockerd[1967]: time="2026-01-27T13:01:46.796001711Z" level=info msg="Loading containers: start." 
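Once dockerd logs "Loading containers: start." it begins programming the kernel firewall, which is what the burst of NETFILTER_CFG audit records below corresponds to: each PROCTITLE decodes to an iptables invocation creating the standard Docker chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-ISOLATION-STAGE-1/2, and so on). After startup the result can be inspected with:

  iptables -t nat -S DOCKER              # NAT chain the PREROUTING/OUTPUT jumps target
  iptables -t filter -S DOCKER-FORWARD   # forwarding chain hooked into FORWARD
  docker network ls                      # default bridge/host/none networks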
Jan 27 13:01:46.825848 kernel: Initializing XFRM netlink socket Jan 27 13:01:46.908000 audit[2018]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.908000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd3b466010 a2=0 a3=0 items=0 ppid=1967 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 13:01:46.911000 audit[2020]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.911000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffddd5cfb30 a2=0 a3=0 items=0 ppid=1967 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 13:01:46.914000 audit[2022]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.914000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc81b18850 a2=0 a3=0 items=0 ppid=1967 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 13:01:46.918000 audit[2024]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.918000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce4cf2430 a2=0 a3=0 items=0 ppid=1967 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 13:01:46.921000 audit[2026]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.921000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa926cd30 a2=0 a3=0 items=0 ppid=1967 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 13:01:46.925000 audit[2028]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.925000 audit[2028]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffee3886c10 a2=0 a3=0 items=0 ppid=1967 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 13:01:46.928000 audit[2030]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.928000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa61b9350 a2=0 a3=0 items=0 ppid=1967 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.928000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 13:01:46.932000 audit[2032]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.932000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffeeec63350 a2=0 a3=0 items=0 ppid=1967 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.932000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 13:01:46.981000 audit[2035]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.981000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffea3f12170 a2=0 a3=0 items=0 ppid=1967 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 27 13:01:46.985000 audit[2037]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.985000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff613ae410 a2=0 a3=0 items=0 ppid=1967 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 13:01:46.988000 audit[2039]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.988000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd62cfe70 a2=0 
a3=0 items=0 ppid=1967 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 13:01:46.991000 audit[2041]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.991000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe3142a1b0 a2=0 a3=0 items=0 ppid=1967 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 13:01:46.994000 audit[2043]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:46.994000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd747f2c60 a2=0 a3=0 items=0 ppid=1967 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:46.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 13:01:47.048000 audit[2073]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.048000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff2eb72bb0 a2=0 a3=0 items=0 ppid=1967 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 27 13:01:47.052000 audit[2075]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.052000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff079e0b70 a2=0 a3=0 items=0 ppid=1967 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 27 13:01:47.055000 audit[2077]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.055000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5e3fa770 a2=0 a3=0 items=0 ppid=1967 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 13:01:47.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 27 13:01:47.058000 audit[2079]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.058000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd39f44100 a2=0 a3=0 items=0 ppid=1967 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 27 13:01:47.061000 audit[2081]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.061000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffac68cef0 a2=0 a3=0 items=0 ppid=1967 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 27 13:01:47.064000 audit[2083]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.064000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff990ea8f0 a2=0 a3=0 items=0 ppid=1967 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 13:01:47.067000 audit[2085]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.067000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1da18f80 a2=0 a3=0 items=0 ppid=1967 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 13:01:47.070000 audit[2087]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.070000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdcf12c600 a2=0 a3=0 items=0 ppid=1967 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.070000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 27 13:01:47.074000 audit[2089]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.074000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdcbcd2fe0 a2=0 a3=0 items=0 ppid=1967 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.074000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 27 13:01:47.077000 audit[2091]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.077000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc240267c0 a2=0 a3=0 items=0 ppid=1967 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 27 13:01:47.080000 audit[2093]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.080000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc42b53870 a2=0 a3=0 items=0 ppid=1967 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 27 13:01:47.084000 audit[2095]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.084000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdc48de6c0 a2=0 a3=0 items=0 ppid=1967 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 27 13:01:47.087000 audit[2097]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.087000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd653753b0 a2=0 a3=0 items=0 ppid=1967 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.087000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 27 13:01:47.095000 audit[2102]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.095000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe96917410 a2=0 a3=0 items=0 ppid=1967 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 13:01:47.098000 audit[2104]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.098000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff44814b50 a2=0 a3=0 items=0 ppid=1967 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 13:01:47.102000 audit[2106]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.102000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe91b26660 a2=0 a3=0 items=0 ppid=1967 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 13:01:47.105000 audit[2108]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.105000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe97d5b9c0 a2=0 a3=0 items=0 ppid=1967 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 27 13:01:47.108000 audit[2110]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.108000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff8d8a0100 a2=0 a3=0 items=0 ppid=1967 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 27 13:01:47.111000 audit[2112]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2112 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:01:47.111000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffed34b19f0 a2=0 a3=0 items=0 ppid=1967 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 27 13:01:47.144000 audit[2116]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.144000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe20e6be60 a2=0 a3=0 items=0 ppid=1967 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 27 13:01:47.148000 audit[2118]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.148000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe054f11c0 a2=0 a3=0 items=0 ppid=1967 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 27 13:01:47.161000 audit[2126]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.161000 audit[2126]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd2a570ee0 a2=0 a3=0 items=0 ppid=1967 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.161000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 27 13:01:47.174000 audit[2132]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.174000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe8bca00a0 a2=0 a3=0 items=0 ppid=1967 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 27 13:01:47.178000 audit[2134]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 
13:01:47.178000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff3dda4f10 a2=0 a3=0 items=0 ppid=1967 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 27 13:01:47.181000 audit[2136]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.181000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffcc97ccc0 a2=0 a3=0 items=0 ppid=1967 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 27 13:01:47.184000 audit[2138]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.184000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc986937a0 a2=0 a3=0 items=0 ppid=1967 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 27 13:01:47.187000 audit[2140]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:01:47.187000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd31685fd0 a2=0 a3=0 items=0 ppid=1967 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:01:47.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 27 13:01:47.189872 systemd-networkd[1541]: docker0: Link UP Jan 27 13:01:47.195504 dockerd[1967]: time="2026-01-27T13:01:47.195374663Z" level=info msg="Loading containers: done." Jan 27 13:01:47.216789 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3127118987-merged.mount: Deactivated successfully. 
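The audit PROCTITLE records above carry each iptables/ip6tables invocation hex-encoded, with NUL bytes separating the arguments. A minimal decoding sketch in Python, using the hex string from the first NETFILTER_CFG record above:

    # Decode an audit PROCTITLE value: the process argv, hex-encoded,
    # with NUL bytes between arguments. Hex copied from the first
    # NETFILTER_CFG record in this log.
    proctitle = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
                 "002D74006E6174002D4E00444F434B4552")
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /usr/bin/iptables --wait -t nat -N DOCKER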
Jan 27 13:01:47.224544 dockerd[1967]: time="2026-01-27T13:01:47.224311284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 27 13:01:47.224544 dockerd[1967]: time="2026-01-27T13:01:47.224478169Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 27 13:01:47.224888 dockerd[1967]: time="2026-01-27T13:01:47.224862650Z" level=info msg="Initializing buildkit" Jan 27 13:01:47.255138 dockerd[1967]: time="2026-01-27T13:01:47.253421335Z" level=info msg="Completed buildkit initialization" Jan 27 13:01:47.265827 dockerd[1967]: time="2026-01-27T13:01:47.265779913Z" level=info msg="Daemon has completed initialization" Jan 27 13:01:47.266110 dockerd[1967]: time="2026-01-27T13:01:47.266033631Z" level=info msg="API listen on /run/docker.sock" Jan 27 13:01:47.266781 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 27 13:01:47.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:48.243991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 27 13:01:48.247712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:01:48.604664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:01:48.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:48.625432 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 13:01:48.698543 containerd[1636]: time="2026-01-27T13:01:48.697632056Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 27 13:01:48.720492 kubelet[2186]: E0127 13:01:48.720401 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 13:01:48.723675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 13:01:48.723915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 13:01:48.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 13:01:48.724998 systemd[1]: kubelet.service: Consumed 404ms CPU time, 108.6M memory peak. Jan 27 13:01:49.593000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520691636.mount: Deactivated successfully. 
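The overlay2 warning above names CONFIG_OVERLAY_FS_REDIRECT_DIR; one way to confirm how the running kernel was built is to read its config. A sketch under the assumption that the kernel exposes /proc/config.gz (it only exists with CONFIG_IKCONFIG_PROC, and the /boot fallback may be absent on Flatcar):

    import gzip
    import os
    import platform

    # Look for the overlayfs redirect_dir option in the running kernel's config.
    candidates = ["/proc/config.gz", f"/boot/config-{platform.release()}"]
    for path in candidates:
        if not os.path.exists(path):
            continue
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt") as fh:
            for line in fh:
                if "CONFIG_OVERLAY_FS_REDIRECT_DIR" in line:
                    print(path + ": " + line.strip())
        break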
Jan 27 13:01:51.943080 containerd[1636]: time="2026-01-27T13:01:51.942955370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:51.944761 containerd[1636]: time="2026-01-27T13:01:51.944722143Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 27 13:01:51.945747 containerd[1636]: time="2026-01-27T13:01:51.945076067Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:51.949648 containerd[1636]: time="2026-01-27T13:01:51.949593958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:51.951352 containerd[1636]: time="2026-01-27T13:01:51.951003589Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.253140147s" Jan 27 13:01:51.951352 containerd[1636]: time="2026-01-27T13:01:51.951090050Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 27 13:01:51.952549 containerd[1636]: time="2026-01-27T13:01:51.952464822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 27 13:01:55.424070 containerd[1636]: time="2026-01-27T13:01:55.423979703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:55.431540 containerd[1636]: time="2026-01-27T13:01:55.430724046Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 27 13:01:55.432204 containerd[1636]: time="2026-01-27T13:01:55.432169652Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:55.436442 containerd[1636]: time="2026-01-27T13:01:55.436398074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:55.438030 containerd[1636]: time="2026-01-27T13:01:55.437889982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.485317323s" Jan 27 13:01:55.438184 containerd[1636]: time="2026-01-27T13:01:55.438155453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 27 
13:01:55.439070 containerd[1636]: time="2026-01-27T13:01:55.438948655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 27 13:01:56.248235 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 27 13:01:56.272580 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 27 13:01:56.272748 kernel: audit: type=1131 audit(1769518916.247:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:56.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:56.287000 audit: BPF prog-id=61 op=UNLOAD Jan 27 13:01:56.290604 kernel: audit: type=1334 audit(1769518916.287:287): prog-id=61 op=UNLOAD Jan 27 13:01:57.718552 containerd[1636]: time="2026-01-27T13:01:57.718449537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:57.720585 containerd[1636]: time="2026-01-27T13:01:57.720548368Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 27 13:01:57.721881 containerd[1636]: time="2026-01-27T13:01:57.721813762Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:57.725558 containerd[1636]: time="2026-01-27T13:01:57.725258623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:01:57.727164 containerd[1636]: time="2026-01-27T13:01:57.726912084Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.287695725s" Jan 27 13:01:57.727164 containerd[1636]: time="2026-01-27T13:01:57.726963074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 27 13:01:57.728891 containerd[1636]: time="2026-01-27T13:01:57.728848430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 27 13:01:58.732919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 27 13:01:58.737537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:01:59.066778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:01:59.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:01:59.082531 kernel: audit: type=1130 audit(1769518919.065:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:01:59.098158 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 13:01:59.180218 kubelet[2276]: E0127 13:01:59.180064 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 13:01:59.182867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 13:01:59.183147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 13:01:59.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 13:01:59.184804 systemd[1]: kubelet.service: Consumed 241ms CPU time, 108.2M memory peak. Jan 27 13:01:59.188679 kernel: audit: type=1131 audit(1769518919.183:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 13:01:59.672312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207512129.mount: Deactivated successfully. Jan 27 13:02:00.771493 containerd[1636]: time="2026-01-27T13:02:00.771407652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:00.773592 containerd[1636]: time="2026-01-27T13:02:00.773548189Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 27 13:02:00.775489 containerd[1636]: time="2026-01-27T13:02:00.775396388Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:00.778246 containerd[1636]: time="2026-01-27T13:02:00.778170201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:00.779898 containerd[1636]: time="2026-01-27T13:02:00.779184169Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.050293919s" Jan 27 13:02:00.779898 containerd[1636]: time="2026-01-27T13:02:00.779242119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 27 13:02:00.780033 containerd[1636]: time="2026-01-27T13:02:00.779905903Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 27 13:02:01.743867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794338886.mount: Deactivated successfully. 
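The repeated kubelet restarts above all fail on the same missing file, /var/lib/kubelet/config.yaml. A small check along those lines; the path is taken verbatim from the logged error, while the kubeadm remark is a general expectation rather than something stated in this log:

    import os

    # Path taken verbatim from the kubelet error above.
    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

    if os.path.exists(KUBELET_CONFIG):
        print(f"{KUBELET_CONFIG} is present; kubelet should get past config loading")
    else:
        # kubeadm init/join normally writes this file; until it appears the
        # unit keeps cycling through the restart counter seen in the log.
        print(f"{KUBELET_CONFIG} is missing; expect kubelet.service to keep restarting")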
Jan 27 13:02:03.210260 containerd[1636]: time="2026-01-27T13:02:03.210143992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:03.212104 containerd[1636]: time="2026-01-27T13:02:03.211805013Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17572350" Jan 27 13:02:03.213035 containerd[1636]: time="2026-01-27T13:02:03.212995286Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:03.217373 containerd[1636]: time="2026-01-27T13:02:03.217303458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:03.219298 containerd[1636]: time="2026-01-27T13:02:03.219259484Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.439317704s" Jan 27 13:02:03.219443 containerd[1636]: time="2026-01-27T13:02:03.219398058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 27 13:02:03.220203 containerd[1636]: time="2026-01-27T13:02:03.220152090Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 27 13:02:04.205373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948654987.mount: Deactivated successfully. 
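The pull messages report both the bytes transferred and the wall-clock time, so the effective transfer rate is simple arithmetic. A sketch using the coredns figures logged just above (note that "bytes read" is the compressed transfer reported by containerd, not the unpacked image size it also logs):

    # Effective transfer rate for the coredns pull reported above.
    bytes_read = 17_572_350      # "active requests=0, bytes read=17572350"
    duration_s = 2.439317704     # "in 2.439317704s"
    rate_mib_s = bytes_read / duration_s / (1024 ** 2)
    print(f"{rate_mib_s:.1f} MiB/s")   # roughly 6.9 MiB/s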
Jan 27 13:02:04.211531 containerd[1636]: time="2026-01-27T13:02:04.211459013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 13:02:04.213874 containerd[1636]: time="2026-01-27T13:02:04.213589217Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 27 13:02:04.214837 containerd[1636]: time="2026-01-27T13:02:04.214794462Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 13:02:04.221658 containerd[1636]: time="2026-01-27T13:02:04.221593690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 27 13:02:04.224240 containerd[1636]: time="2026-01-27T13:02:04.224158514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.003775565s" Jan 27 13:02:04.224240 containerd[1636]: time="2026-01-27T13:02:04.224201332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 27 13:02:04.225678 containerd[1636]: time="2026-01-27T13:02:04.225503427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 27 13:02:04.869721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4265675457.mount: Deactivated successfully. Jan 27 13:02:08.853951 update_engine[1617]: I20260127 13:02:08.853502 1617 update_attempter.cc:509] Updating boot flags... Jan 27 13:02:09.187778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 27 13:02:09.192402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:02:09.418807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:02:09.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:09.427212 kernel: audit: type=1130 audit(1769518929.418:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:02:09.445486 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 27 13:02:09.598687 kubelet[2416]: E0127 13:02:09.598593 2416 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 27 13:02:09.603112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 27 13:02:09.603649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 27 13:02:09.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 13:02:09.604814 systemd[1]: kubelet.service: Consumed 292ms CPU time, 109.9M memory peak. Jan 27 13:02:09.609557 kernel: audit: type=1131 audit(1769518929.604:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 13:02:12.413589 containerd[1636]: time="2026-01-27T13:02:12.413479742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:12.416097 containerd[1636]: time="2026-01-27T13:02:12.416045200Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 27 13:02:12.416989 containerd[1636]: time="2026-01-27T13:02:12.416934168Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:12.421884 containerd[1636]: time="2026-01-27T13:02:12.421822851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:12.428554 containerd[1636]: time="2026-01-27T13:02:12.426542465Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.196871244s" Jan 27 13:02:12.428554 containerd[1636]: time="2026-01-27T13:02:12.426601496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 27 13:02:16.412333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:02:16.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:16.413233 systemd[1]: kubelet.service: Consumed 292ms CPU time, 109.9M memory peak. Jan 27 13:02:16.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 27 13:02:16.423647 kernel: audit: type=1130 audit(1769518936.412:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:16.423828 kernel: audit: type=1131 audit(1769518936.412:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:16.427029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:02:16.469313 systemd[1]: Reload requested from client PID 2456 ('systemctl') (unit session-12.scope)... Jan 27 13:02:16.469382 systemd[1]: Reloading... Jan 27 13:02:16.633629 zram_generator::config[2500]: No configuration found. Jan 27 13:02:17.043971 systemd[1]: Reloading finished in 573 ms. Jan 27 13:02:17.080886 kernel: audit: type=1334 audit(1769518937.073:294): prog-id=65 op=LOAD Jan 27 13:02:17.081060 kernel: audit: type=1334 audit(1769518937.073:295): prog-id=44 op=UNLOAD Jan 27 13:02:17.081168 kernel: audit: type=1334 audit(1769518937.073:296): prog-id=66 op=LOAD Jan 27 13:02:17.073000 audit: BPF prog-id=65 op=LOAD Jan 27 13:02:17.073000 audit: BPF prog-id=44 op=UNLOAD Jan 27 13:02:17.073000 audit: BPF prog-id=66 op=LOAD Jan 27 13:02:17.085141 kernel: audit: type=1334 audit(1769518937.073:297): prog-id=67 op=LOAD Jan 27 13:02:17.085250 kernel: audit: type=1334 audit(1769518937.074:298): prog-id=45 op=UNLOAD Jan 27 13:02:17.073000 audit: BPF prog-id=67 op=LOAD Jan 27 13:02:17.074000 audit: BPF prog-id=45 op=UNLOAD Jan 27 13:02:17.089182 kernel: audit: type=1334 audit(1769518937.074:299): prog-id=46 op=UNLOAD Jan 27 13:02:17.089279 kernel: audit: type=1334 audit(1769518937.074:300): prog-id=68 op=LOAD Jan 27 13:02:17.074000 audit: BPF prog-id=46 op=UNLOAD Jan 27 13:02:17.074000 audit: BPF prog-id=68 op=LOAD Jan 27 13:02:17.075000 audit: BPF prog-id=64 op=UNLOAD Jan 27 13:02:17.077000 audit: BPF prog-id=69 op=LOAD Jan 27 13:02:17.077000 audit: BPF prog-id=50 op=UNLOAD Jan 27 13:02:17.078000 audit: BPF prog-id=70 op=LOAD Jan 27 13:02:17.078000 audit: BPF prog-id=71 op=LOAD Jan 27 13:02:17.078000 audit: BPF prog-id=51 op=UNLOAD Jan 27 13:02:17.078000 audit: BPF prog-id=52 op=UNLOAD Jan 27 13:02:17.082000 audit: BPF prog-id=72 op=LOAD Jan 27 13:02:17.082000 audit: BPF prog-id=58 op=UNLOAD Jan 27 13:02:17.082000 audit: BPF prog-id=73 op=LOAD Jan 27 13:02:17.082000 audit: BPF prog-id=74 op=LOAD Jan 27 13:02:17.082000 audit: BPF prog-id=59 op=UNLOAD Jan 27 13:02:17.082000 audit: BPF prog-id=60 op=UNLOAD Jan 27 13:02:17.082000 audit: BPF prog-id=75 op=LOAD Jan 27 13:02:17.093582 kernel: audit: type=1334 audit(1769518937.075:301): prog-id=64 op=UNLOAD Jan 27 13:02:17.083000 audit: BPF prog-id=47 op=UNLOAD Jan 27 13:02:17.083000 audit: BPF prog-id=76 op=LOAD Jan 27 13:02:17.083000 audit: BPF prog-id=77 op=LOAD Jan 27 13:02:17.083000 audit: BPF prog-id=48 op=UNLOAD Jan 27 13:02:17.083000 audit: BPF prog-id=49 op=UNLOAD Jan 27 13:02:17.083000 audit: BPF prog-id=78 op=LOAD Jan 27 13:02:17.083000 audit: BPF prog-id=56 op=UNLOAD Jan 27 13:02:17.086000 audit: BPF prog-id=79 op=LOAD Jan 27 13:02:17.086000 audit: BPF prog-id=53 op=UNLOAD Jan 27 13:02:17.086000 audit: BPF prog-id=80 op=LOAD Jan 27 13:02:17.086000 audit: BPF prog-id=41 op=UNLOAD Jan 27 13:02:17.086000 audit: BPF prog-id=81 op=LOAD Jan 27 13:02:17.086000 audit: BPF prog-id=82 op=LOAD Jan 27 
13:02:17.087000 audit: BPF prog-id=42 op=UNLOAD Jan 27 13:02:17.087000 audit: BPF prog-id=43 op=UNLOAD Jan 27 13:02:17.089000 audit: BPF prog-id=83 op=LOAD Jan 27 13:02:17.089000 audit: BPF prog-id=57 op=UNLOAD Jan 27 13:02:17.089000 audit: BPF prog-id=84 op=LOAD Jan 27 13:02:17.089000 audit: BPF prog-id=85 op=LOAD Jan 27 13:02:17.089000 audit: BPF prog-id=54 op=UNLOAD Jan 27 13:02:17.089000 audit: BPF prog-id=55 op=UNLOAD Jan 27 13:02:17.118414 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 27 13:02:17.118674 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 27 13:02:17.119268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:02:17.119410 systemd[1]: kubelet.service: Consumed 167ms CPU time, 97.7M memory peak. Jan 27 13:02:17.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 27 13:02:17.122193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:02:17.353072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:02:17.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:17.364120 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 13:02:17.482043 kubelet[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:02:17.483366 kubelet[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 13:02:17.483366 kubelet[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
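The burst of "audit: BPF prog-id=… op=LOAD/UNLOAD" records above accompanies the systemd reload re-attaching its BPF programs. If that churn ever needs quantifying, a rough tally over journal text could look like this (reading the journal from stdin is an assumption; the file name in the comment is illustrative):

    import re
    import sys
    from collections import Counter

    # Tally "audit: BPF prog-id=<n> op=LOAD/UNLOAD" records, e.g. with
    # `journalctl -k | python3 bpf_tally.py`.
    ops = Counter()
    pattern = re.compile(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)")
    for line in sys.stdin:
        for match in pattern.finditer(line):
            ops[match.group(1)] += 1
    print(dict(ops))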
Jan 27 13:02:17.483366 kubelet[2572]: I0127 13:02:17.483137 2572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 13:02:17.886057 kubelet[2572]: I0127 13:02:17.886005 2572 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 27 13:02:17.886362 kubelet[2572]: I0127 13:02:17.886342 2572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 13:02:17.886859 kubelet[2572]: I0127 13:02:17.886837 2572 server.go:954] "Client rotation is on, will bootstrap in background" Jan 27 13:02:17.932690 kubelet[2572]: E0127 13:02:17.930561 2572 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.66.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:17.939024 kubelet[2572]: I0127 13:02:17.938980 2572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 13:02:17.965356 kubelet[2572]: I0127 13:02:17.965307 2572 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 13:02:17.977078 kubelet[2572]: I0127 13:02:17.977054 2572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 27 13:02:17.980864 kubelet[2572]: I0127 13:02:17.980808 2572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 13:02:17.981336 kubelet[2572]: I0127 13:02:17.980962 2572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-4nwk8.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 13:02:17.988458 kubelet[2572]: I0127 13:02:17.988427 2572 topology_manager.go:138] "Creating topology manager with none policy" Jan 
27 13:02:17.988615 kubelet[2572]: I0127 13:02:17.988596 2572 container_manager_linux.go:304] "Creating device plugin manager" Jan 27 13:02:17.995652 kubelet[2572]: I0127 13:02:17.995612 2572 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:02:17.999735 kubelet[2572]: I0127 13:02:17.999709 2572 kubelet.go:446] "Attempting to sync node with API server" Jan 27 13:02:17.999906 kubelet[2572]: I0127 13:02:17.999877 2572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 13:02:18.001665 kubelet[2572]: I0127 13:02:18.001639 2572 kubelet.go:352] "Adding apiserver pod source" Jan 27 13:02:18.001830 kubelet[2572]: I0127 13:02:18.001808 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 13:02:18.010538 kubelet[2572]: W0127 13:02:18.010280 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.66.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4nwk8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:18.010538 kubelet[2572]: E0127 13:02:18.010402 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.66.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4nwk8.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:18.012238 kubelet[2572]: I0127 13:02:18.011687 2572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 13:02:18.014883 kubelet[2572]: I0127 13:02:18.014844 2572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 13:02:18.019568 kubelet[2572]: W0127 13:02:18.019528 2572 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
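The container_manager_linux.go:273 entry above embeds the whole node config as one inline JSON object, which is hard to read in place. A sketch that extracts and pretty-prints it from a raw journal line, using brace matching since the object is followed by more log text (raw_line is assumed to hold that entry as a string):

    import json

    def extract_node_config(line: str) -> dict:
        """Pull the nodeConfig={...} object out of a kubelet log line."""
        start = line.index("nodeConfig=") + len("nodeConfig=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return json.loads(line[start:i + 1])
        raise ValueError("unbalanced braces after nodeConfig=")

    # Usage (raw_line being the container_manager_linux.go:273 entry above):
    # print(json.dumps(extract_node_config(raw_line), indent=2))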
Jan 27 13:02:18.022479 kubelet[2572]: I0127 13:02:18.022450 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 27 13:02:18.022585 kubelet[2572]: I0127 13:02:18.022547 2572 server.go:1287] "Started kubelet" Jan 27 13:02:18.025886 kubelet[2572]: W0127 13:02:18.025843 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.66.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:18.026391 kubelet[2572]: E0127 13:02:18.026041 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.66.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:18.026391 kubelet[2572]: I0127 13:02:18.026209 2572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 13:02:18.029537 kubelet[2572]: I0127 13:02:18.029230 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 13:02:18.029963 kubelet[2572]: I0127 13:02:18.029934 2572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 13:02:18.031786 kubelet[2572]: I0127 13:02:18.031763 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 13:02:18.036316 kubelet[2572]: E0127 13:02:18.031200 2572 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.190:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.190:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-4nwk8.gb1.brightbox.com.188e98194c27fdba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-4nwk8.gb1.brightbox.com,UID:srv-4nwk8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-4nwk8.gb1.brightbox.com,},FirstTimestamp:2026-01-27 13:02:18.022477242 +0000 UTC m=+0.596248746,LastTimestamp:2026-01-27 13:02:18.022477242 +0000 UTC m=+0.596248746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-4nwk8.gb1.brightbox.com,}" Jan 27 13:02:18.038771 kubelet[2572]: I0127 13:02:18.038745 2572 server.go:479] "Adding debug handlers to kubelet server" Jan 27 13:02:18.040241 kubelet[2572]: I0127 13:02:18.040215 2572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 13:02:18.042956 kubelet[2572]: I0127 13:02:18.042931 2572 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 27 13:02:18.048199 kubelet[2572]: I0127 13:02:18.043206 2572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 27 13:02:18.048199 kubelet[2572]: E0127 13:02:18.046159 2572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" Jan 27 13:02:18.048199 kubelet[2572]: I0127 13:02:18.047675 2572 reconciler.go:26] "Reconciler: start to sync state" Jan 27 13:02:18.048199 kubelet[2572]: E0127 13:02:18.047808 2572 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.230.66.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4nwk8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.190:6443: connect: connection refused" interval="200ms" Jan 27 13:02:18.048199 kubelet[2572]: W0127 13:02:18.047917 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.66.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:18.048199 kubelet[2572]: E0127 13:02:18.047969 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.66.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:18.048891 kubelet[2572]: I0127 13:02:18.048868 2572 factory.go:221] Registration of the systemd container factory successfully Jan 27 13:02:18.050133 kubelet[2572]: I0127 13:02:18.049870 2572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 13:02:18.049000 audit[2584]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.049000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc69ef46b0 a2=0 a3=0 items=0 ppid=2572 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 13:02:18.053679 kubelet[2572]: E0127 13:02:18.053649 2572 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 13:02:18.053000 audit[2586]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.053000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc13366a00 a2=0 a3=0 items=0 ppid=2572 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 13:02:18.056540 kubelet[2572]: I0127 13:02:18.056233 2572 factory.go:221] Registration of the containerd container factory successfully Jan 27 13:02:18.061000 audit[2588]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.061000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe7f1749a0 a2=0 a3=0 items=0 ppid=2572 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 13:02:18.079000 audit[2592]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.079000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffff6104b10 a2=0 a3=0 items=0 ppid=2572 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 13:02:18.098000 audit[2597]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.098000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc41197280 a2=0 a3=0 items=0 ppid=2572 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 27 13:02:18.099070 kubelet[2572]: I0127 13:02:18.098927 2572 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 27 13:02:18.099000 audit[2599]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:18.099000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe15b807d0 a2=0 a3=0 items=0 ppid=2572 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 27 13:02:18.101551 kubelet[2572]: I0127 13:02:18.101123 2572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 13:02:18.101551 kubelet[2572]: I0127 13:02:18.101179 2572 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 27 13:02:18.101551 kubelet[2572]: I0127 13:02:18.101229 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 13:02:18.101551 kubelet[2572]: I0127 13:02:18.101243 2572 kubelet.go:2382] "Starting kubelet main sync loop" Jan 27 13:02:18.101551 kubelet[2572]: E0127 13:02:18.101346 2572 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 13:02:18.101000 audit[2598]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.101000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd51ec13b0 a2=0 a3=0 items=0 ppid=2572 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 13:02:18.105782 kubelet[2572]: W0127 13:02:18.105408 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.66.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:18.105782 kubelet[2572]: E0127 13:02:18.105471 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.66.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:18.107000 audit[2601]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.107000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcd5946b0 a2=0 a3=0 items=0 ppid=2572 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 
13:02:18.107995 kubelet[2572]: I0127 13:02:18.107335 2572 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 13:02:18.107995 kubelet[2572]: I0127 13:02:18.107358 2572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 13:02:18.107995 kubelet[2572]: I0127 13:02:18.107391 2572 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:02:18.109618 kubelet[2572]: I0127 13:02:18.109597 2572 policy_none.go:49] "None policy: Start" Jan 27 13:02:18.109670 kubelet[2572]: I0127 13:02:18.109628 2572 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 27 13:02:18.109670 kubelet[2572]: I0127 13:02:18.109645 2572 state_mem.go:35] "Initializing new in-memory state store" Jan 27 13:02:18.109000 audit[2600]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:18.109000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff89efc0 a2=0 a3=0 items=0 ppid=2572 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.109000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 27 13:02:18.111000 audit[2603]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:18.111000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff24554000 a2=0 a3=0 items=0 ppid=2572 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 13:02:18.112000 audit[2604]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2604 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:18.112000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc8235810 a2=0 a3=0 items=0 ppid=2572 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.112000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 27 13:02:18.114000 audit[2605]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:18.114000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbc2c68a0 a2=0 a3=0 items=0 ppid=2572 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.114000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 27 13:02:18.121111 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
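The NETFILTER_CFG / SYSCALL / PROCTITLE triplets interleaved above are kernel audit records for each xtables-nft-multi invocation the kubelet makes while creating its KUBE-IPTABLES-HINT, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains. The PROCTITLE field is just the process argv, hex-encoded with NUL separators, so it can be decoded back into the exact command line. A small sketch; decode_proctitle() is a hypothetical helper and the hex string is copied from the audit[2603] record above:

def decode_proctitle(hex_argv: str) -> str:
    """Turn an audit PROCTITLE hex dump into the original command line."""
    return " ".join(bytes.fromhex(hex_argv).decode("ascii").split("\x00"))

canary_filter = ("69707461626C6573002D770035002D5700313030303030"
                 "002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572")
print(decode_proctitle(canary_filter))
# -> iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t filter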
Jan 27 13:02:18.138089 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 27 13:02:18.145656 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 27 13:02:18.147959 kubelet[2572]: E0127 13:02:18.147931 2572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" Jan 27 13:02:18.165264 kubelet[2572]: I0127 13:02:18.165217 2572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 13:02:18.165693 kubelet[2572]: I0127 13:02:18.165659 2572 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 13:02:18.165752 kubelet[2572]: I0127 13:02:18.165704 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 13:02:18.167247 kubelet[2572]: I0127 13:02:18.167057 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 13:02:18.168850 kubelet[2572]: E0127 13:02:18.168802 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 27 13:02:18.169245 kubelet[2572]: E0127 13:02:18.168918 2572 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-4nwk8.gb1.brightbox.com\" not found" Jan 27 13:02:18.220853 systemd[1]: Created slice kubepods-burstable-podeeaade419e62e72979968504232eb47e.slice - libcontainer container kubepods-burstable-podeeaade419e62e72979968504232eb47e.slice. Jan 27 13:02:18.241557 kubelet[2572]: E0127 13:02:18.240935 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.247341 systemd[1]: Created slice kubepods-burstable-pod801d25e5b3fa444b0556b5e2e23f9f46.slice - libcontainer container kubepods-burstable-pod801d25e5b3fa444b0556b5e2e23f9f46.slice. 
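With the systemd cgroup driver and cgroup v2 reported earlier, the kubelet asks systemd for one slice per QoS class and one per pod, which is what the kubepods-burstable-pod<uid>.slice messages above show for the (burstable) static control-plane pods. A simplified sketch of how the leaf slice name is derived from the QoS class and pod UID; pod_slice() is a hypothetical helper, and the real driver additionally escapes characters such as "-" in the UID, which these hex UIDs happen not to contain:

def pod_slice(qos_class: str, pod_uid: str) -> str:
    """Leaf slice name for a pod: guaranteed pods sit directly under kubepods.slice,
    burstable/besteffort pods under their QoS-class slice."""
    prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{prefix}-pod{pod_uid}.slice"

# UID taken from the kube-controller-manager pod slice created above.
print(pod_slice("burstable", "eeaade419e62e72979968504232eb47e"))
# -> kubepods-burstable-podeeaade419e62e72979968504232eb47e.slice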
Jan 27 13:02:18.250433 kubelet[2572]: E0127 13:02:18.249822 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4nwk8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.190:6443: connect: connection refused" interval="400ms" Jan 27 13:02:18.252031 kubelet[2572]: I0127 13:02:18.250068 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-k8s-certs\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252031 kubelet[2572]: I0127 13:02:18.251159 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/801d25e5b3fa444b0556b5e2e23f9f46-kubeconfig\") pod \"kube-scheduler-srv-4nwk8.gb1.brightbox.com\" (UID: \"801d25e5b3fa444b0556b5e2e23f9f46\") " pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252031 kubelet[2572]: I0127 13:02:18.251224 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce8034d5daeb76672de9ca75d73cb922-ca-certs\") pod \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" (UID: \"ce8034d5daeb76672de9ca75d73cb922\") " pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252031 kubelet[2572]: I0127 13:02:18.251258 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce8034d5daeb76672de9ca75d73cb922-k8s-certs\") pod \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" (UID: \"ce8034d5daeb76672de9ca75d73cb922\") " pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252031 kubelet[2572]: I0127 13:02:18.251298 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce8034d5daeb76672de9ca75d73cb922-usr-share-ca-certificates\") pod \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" (UID: \"ce8034d5daeb76672de9ca75d73cb922\") " pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252305 kubelet[2572]: I0127 13:02:18.251330 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-ca-certs\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252305 kubelet[2572]: I0127 13:02:18.251355 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-flexvolume-dir\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252305 kubelet[2572]: I0127 13:02:18.251382 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-kubeconfig\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252305 kubelet[2572]: I0127 13:02:18.251408 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.252305 kubelet[2572]: E0127 13:02:18.251785 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.255891 systemd[1]: Created slice kubepods-burstable-podce8034d5daeb76672de9ca75d73cb922.slice - libcontainer container kubepods-burstable-podce8034d5daeb76672de9ca75d73cb922.slice. Jan 27 13:02:18.258596 kubelet[2572]: E0127 13:02:18.258572 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.268602 kubelet[2572]: I0127 13:02:18.268569 2572 kubelet_node_status.go:75] "Attempting to register node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.269350 kubelet[2572]: E0127 13:02:18.269316 2572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.190:6443/api/v1/nodes\": dial tcp 10.230.66.190:6443: connect: connection refused" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.472446 kubelet[2572]: I0127 13:02:18.472383 2572 kubelet_node_status.go:75] "Attempting to register node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.473336 kubelet[2572]: E0127 13:02:18.473277 2572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.190:6443/api/v1/nodes\": dial tcp 10.230.66.190:6443: connect: connection refused" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.545340 containerd[1636]: time="2026-01-27T13:02:18.544596494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-4nwk8.gb1.brightbox.com,Uid:eeaade419e62e72979968504232eb47e,Namespace:kube-system,Attempt:0,}" Jan 27 13:02:18.553164 containerd[1636]: time="2026-01-27T13:02:18.553093762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-4nwk8.gb1.brightbox.com,Uid:801d25e5b3fa444b0556b5e2e23f9f46,Namespace:kube-system,Attempt:0,}" Jan 27 13:02:18.560970 containerd[1636]: time="2026-01-27T13:02:18.560556368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-4nwk8.gb1.brightbox.com,Uid:ce8034d5daeb76672de9ca75d73cb922,Namespace:kube-system,Attempt:0,}" Jan 27 13:02:18.652696 kubelet[2572]: E0127 13:02:18.652625 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4nwk8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.190:6443: connect: connection refused" interval="800ms" Jan 27 13:02:18.700569 containerd[1636]: time="2026-01-27T13:02:18.699710149Z" level=info msg="connecting to shim 
ca1c62c71a9a441301c1222de890c3e6196947e2aae54ab61f26573249851742" address="unix:///run/containerd/s/eac61f630a1b02289910c11a437a401bbfedf8b30974f267b3783949e6a3447d" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:18.706164 containerd[1636]: time="2026-01-27T13:02:18.705890486Z" level=info msg="connecting to shim 0e54f58dc4fa4b786725e7751bbd33b83f2b51f50e40a3b584cb3518ca02b714" address="unix:///run/containerd/s/386f5fa75aa7a6428e52e5ac6cbdf1832f4cc76d5a5c5dbb1d0787608ca5151a" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:18.709042 containerd[1636]: time="2026-01-27T13:02:18.709008178Z" level=info msg="connecting to shim bc223d2bf3ea0cce72841c69acaf2232c01aba47da13a6a5653c925b0f08f579" address="unix:///run/containerd/s/5205a1fb62b36267a7551579563a97ae7073d517cb8229318bd62993e94bac37" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:18.871743 systemd[1]: Started cri-containerd-0e54f58dc4fa4b786725e7751bbd33b83f2b51f50e40a3b584cb3518ca02b714.scope - libcontainer container 0e54f58dc4fa4b786725e7751bbd33b83f2b51f50e40a3b584cb3518ca02b714. Jan 27 13:02:18.876603 systemd[1]: Started cri-containerd-bc223d2bf3ea0cce72841c69acaf2232c01aba47da13a6a5653c925b0f08f579.scope - libcontainer container bc223d2bf3ea0cce72841c69acaf2232c01aba47da13a6a5653c925b0f08f579. Jan 27 13:02:18.879898 systemd[1]: Started cri-containerd-ca1c62c71a9a441301c1222de890c3e6196947e2aae54ab61f26573249851742.scope - libcontainer container ca1c62c71a9a441301c1222de890c3e6196947e2aae54ab61f26573249851742. Jan 27 13:02:18.885232 kubelet[2572]: I0127 13:02:18.885073 2572 kubelet_node_status.go:75] "Attempting to register node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.885989 kubelet[2572]: E0127 13:02:18.885950 2572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.190:6443/api/v1/nodes\": dial tcp 10.230.66.190:6443: connect: connection refused" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:18.920000 audit: BPF prog-id=86 op=LOAD Jan 27 13:02:18.921000 audit: BPF prog-id=87 op=LOAD Jan 27 13:02:18.921000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2630 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.922000 audit: BPF prog-id=87 op=UNLOAD Jan 27 13:02:18.922000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.922000 audit: BPF prog-id=88 op=LOAD Jan 27 13:02:18.922000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2630 
pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.922000 audit: BPF prog-id=89 op=LOAD Jan 27 13:02:18.922000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2630 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.922000 audit: BPF prog-id=89 op=UNLOAD Jan 27 13:02:18.922000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.922000 audit: BPF prog-id=88 op=UNLOAD Jan 27 13:02:18.922000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.922000 audit: BPF prog-id=90 op=LOAD Jan 27 13:02:18.922000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2630 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323233643262663365613063636537323834316336396163616632 Jan 27 13:02:18.925000 audit: BPF prog-id=91 op=LOAD Jan 27 13:02:18.926000 audit: BPF prog-id=92 op=LOAD Jan 27 13:02:18.926000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.926000 audit: BPF prog-id=92 op=UNLOAD Jan 27 13:02:18.926000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.926000 audit: BPF prog-id=93 op=LOAD Jan 27 13:02:18.926000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.927000 audit: BPF prog-id=94 op=LOAD Jan 27 13:02:18.927000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.927000 audit: BPF prog-id=94 op=UNLOAD Jan 27 13:02:18.927000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.927000 audit: BPF prog-id=93 op=UNLOAD Jan 27 13:02:18.927000 audit[2663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.927000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.927000 audit: BPF prog-id=95 op=LOAD Jan 27 13:02:18.927000 audit[2663]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2627 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316336326337316139613434313330316331323232646538393063 Jan 27 13:02:18.932952 kubelet[2572]: W0127 13:02:18.931385 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.66.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:18.932952 kubelet[2572]: E0127 13:02:18.931897 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.66.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:18.940000 audit: BPF prog-id=96 op=LOAD Jan 27 13:02:18.943000 audit: BPF prog-id=97 op=LOAD Jan 27 13:02:18.943000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.944000 audit: BPF prog-id=97 op=UNLOAD Jan 27 13:02:18.944000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.945000 audit: BPF prog-id=98 op=LOAD Jan 27 13:02:18.945000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.945000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.946000 audit: BPF prog-id=99 op=LOAD Jan 27 13:02:18.946000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.946000 audit: BPF prog-id=99 op=UNLOAD Jan 27 13:02:18.946000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.946000 audit: BPF prog-id=98 op=UNLOAD Jan 27 13:02:18.946000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.946000 audit: BPF prog-id=100 op=LOAD Jan 27 13:02:18.946000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2635 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:18.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065353466353864633466613462373836373235653737353162626433 Jan 27 13:02:18.987338 kubelet[2572]: W0127 13:02:18.987171 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.66.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:18.987338 kubelet[2572]: E0127 13:02:18.987259 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.230.66.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:19.023634 containerd[1636]: time="2026-01-27T13:02:19.023501713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-4nwk8.gb1.brightbox.com,Uid:ce8034d5daeb76672de9ca75d73cb922,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc223d2bf3ea0cce72841c69acaf2232c01aba47da13a6a5653c925b0f08f579\"" Jan 27 13:02:19.034838 containerd[1636]: time="2026-01-27T13:02:19.034639801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-4nwk8.gb1.brightbox.com,Uid:eeaade419e62e72979968504232eb47e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca1c62c71a9a441301c1222de890c3e6196947e2aae54ab61f26573249851742\"" Jan 27 13:02:19.038592 containerd[1636]: time="2026-01-27T13:02:19.038532931Z" level=info msg="CreateContainer within sandbox \"bc223d2bf3ea0cce72841c69acaf2232c01aba47da13a6a5653c925b0f08f579\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 27 13:02:19.040382 kubelet[2572]: W0127 13:02:19.040221 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.66.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:19.040382 kubelet[2572]: E0127 13:02:19.040338 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.66.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:19.042050 containerd[1636]: time="2026-01-27T13:02:19.041694044Z" level=info msg="CreateContainer within sandbox \"ca1c62c71a9a441301c1222de890c3e6196947e2aae54ab61f26573249851742\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 27 13:02:19.053485 containerd[1636]: time="2026-01-27T13:02:19.053433821Z" level=info msg="Container f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:19.061797 kubelet[2572]: W0127 13:02:19.060486 2572 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.66.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4nwk8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.66.190:6443: connect: connection refused Jan 27 13:02:19.061897 kubelet[2572]: E0127 13:02:19.061821 2572 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.66.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-4nwk8.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.66.190:6443: connect: connection refused" logger="UnhandledError" Jan 27 13:02:19.068144 containerd[1636]: time="2026-01-27T13:02:19.068104051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-4nwk8.gb1.brightbox.com,Uid:801d25e5b3fa444b0556b5e2e23f9f46,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e54f58dc4fa4b786725e7751bbd33b83f2b51f50e40a3b584cb3518ca02b714\"" Jan 27 13:02:19.068974 containerd[1636]: time="2026-01-27T13:02:19.068924514Z" level=info 
msg="CreateContainer within sandbox \"bc223d2bf3ea0cce72841c69acaf2232c01aba47da13a6a5653c925b0f08f579\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0\"" Jan 27 13:02:19.070141 containerd[1636]: time="2026-01-27T13:02:19.070110514Z" level=info msg="StartContainer for \"f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0\"" Jan 27 13:02:19.073538 containerd[1636]: time="2026-01-27T13:02:19.072786950Z" level=info msg="CreateContainer within sandbox \"0e54f58dc4fa4b786725e7751bbd33b83f2b51f50e40a3b584cb3518ca02b714\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 27 13:02:19.074471 containerd[1636]: time="2026-01-27T13:02:19.074440659Z" level=info msg="connecting to shim f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0" address="unix:///run/containerd/s/5205a1fb62b36267a7551579563a97ae7073d517cb8229318bd62993e94bac37" protocol=ttrpc version=3 Jan 27 13:02:19.082375 containerd[1636]: time="2026-01-27T13:02:19.082318242Z" level=info msg="Container d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:19.087702 containerd[1636]: time="2026-01-27T13:02:19.087655450Z" level=info msg="Container ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:19.094239 containerd[1636]: time="2026-01-27T13:02:19.094197105Z" level=info msg="CreateContainer within sandbox \"ca1c62c71a9a441301c1222de890c3e6196947e2aae54ab61f26573249851742\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb\"" Jan 27 13:02:19.095986 containerd[1636]: time="2026-01-27T13:02:19.095934127Z" level=info msg="StartContainer for \"d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb\"" Jan 27 13:02:19.098683 containerd[1636]: time="2026-01-27T13:02:19.098115481Z" level=info msg="connecting to shim d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb" address="unix:///run/containerd/s/eac61f630a1b02289910c11a437a401bbfedf8b30974f267b3783949e6a3447d" protocol=ttrpc version=3 Jan 27 13:02:19.103409 containerd[1636]: time="2026-01-27T13:02:19.103365865Z" level=info msg="CreateContainer within sandbox \"0e54f58dc4fa4b786725e7751bbd33b83f2b51f50e40a3b584cb3518ca02b714\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678\"" Jan 27 13:02:19.104230 containerd[1636]: time="2026-01-27T13:02:19.104201004Z" level=info msg="StartContainer for \"ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678\"" Jan 27 13:02:19.104894 systemd[1]: Started cri-containerd-f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0.scope - libcontainer container f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0. Jan 27 13:02:19.112674 containerd[1636]: time="2026-01-27T13:02:19.112624264Z" level=info msg="connecting to shim ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678" address="unix:///run/containerd/s/386f5fa75aa7a6428e52e5ac6cbdf1832f4cc76d5a5c5dbb1d0787608ca5151a" protocol=ttrpc version=3 Jan 27 13:02:19.157764 systemd[1]: Started cri-containerd-d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb.scope - libcontainer container d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb. 
Jan 27 13:02:19.161315 systemd[1]: Started cri-containerd-ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678.scope - libcontainer container ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678. Jan 27 13:02:19.164000 audit: BPF prog-id=101 op=LOAD Jan 27 13:02:19.166000 audit: BPF prog-id=102 op=LOAD Jan 27 13:02:19.166000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.166000 audit: BPF prog-id=102 op=UNLOAD Jan 27 13:02:19.166000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.172000 audit: BPF prog-id=103 op=LOAD Jan 27 13:02:19.172000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.172000 audit: BPF prog-id=104 op=LOAD Jan 27 13:02:19.172000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.173000 audit: BPF prog-id=104 op=UNLOAD Jan 27 13:02:19.173000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.173000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.173000 audit: BPF prog-id=103 op=UNLOAD Jan 27 13:02:19.173000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.173000 audit: BPF prog-id=105 op=LOAD Jan 27 13:02:19.173000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634643338653434386439316438623263356431383763666138643531 Jan 27 13:02:19.206000 audit: BPF prog-id=106 op=LOAD Jan 27 13:02:19.207000 audit: BPF prog-id=107 op=LOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.207000 audit: BPF prog-id=107 op=UNLOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.207000 audit: BPF prog-id=108 op=LOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.207000 audit: BPF prog-id=109 op=LOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.207000 audit: BPF prog-id=109 op=UNLOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.207000 audit: BPF prog-id=108 op=UNLOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.207000 audit: BPF prog-id=110 op=LOAD Jan 27 13:02:19.207000 audit[2756]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2627 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439663035623462373966643739343338633862313462316164303931 Jan 27 13:02:19.212000 audit: BPF prog-id=111 op=LOAD Jan 27 13:02:19.214000 audit: BPF prog-id=112 op=LOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.214000 audit: BPF prog-id=112 op=UNLOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.214000 audit: BPF prog-id=113 op=LOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.214000 audit: BPF prog-id=114 op=LOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.214000 audit: BPF prog-id=114 op=UNLOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.214000 audit: BPF prog-id=113 op=UNLOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.214000 audit: BPF prog-id=115 op=LOAD Jan 27 13:02:19.214000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2635 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:19.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353562646565396535333932323331316633333231653436343264 Jan 27 13:02:19.249820 containerd[1636]: time="2026-01-27T13:02:19.249682819Z" level=info msg="StartContainer for \"f4d38e448d91d8b2c5d187cfa8d51ef9d1bb18156564a16246004eb0dd9472c0\" returns successfully" Jan 27 13:02:19.291354 containerd[1636]: time="2026-01-27T13:02:19.291294811Z" level=info msg="StartContainer for \"d9f05b4b79fd79438c8b14b1ad09111b941fd60e48810a49c01c368deaa33beb\" returns successfully" Jan 27 13:02:19.346877 containerd[1636]: time="2026-01-27T13:02:19.346780482Z" level=info msg="StartContainer for \"ea55bdee9e53922311f3321e4642d8825d75eabccd12276075a164e641213678\" returns successfully" Jan 27 13:02:19.453862 kubelet[2572]: E0127 13:02:19.453799 2572 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-4nwk8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.190:6443: connect: connection refused" interval="1.6s" Jan 27 13:02:19.691222 kubelet[2572]: I0127 13:02:19.691168 2572 kubelet_node_status.go:75] "Attempting to register node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:19.691892 kubelet[2572]: E0127 13:02:19.691674 2572 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.190:6443/api/v1/nodes\": dial tcp 10.230.66.190:6443: connect: connection refused" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:20.136182 kubelet[2572]: E0127 13:02:20.136118 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:20.145659 kubelet[2572]: E0127 13:02:20.145614 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:20.147863 kubelet[2572]: E0127 13:02:20.147716 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:21.154218 kubelet[2572]: E0127 13:02:21.154174 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:21.155272 kubelet[2572]: E0127 13:02:21.155236 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:21.155884 kubelet[2572]: E0127 13:02:21.155860 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:21.296225 kubelet[2572]: I0127 13:02:21.296179 2572 kubelet_node_status.go:75] "Attempting to register node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.154207 kubelet[2572]: E0127 13:02:22.154159 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.155160 kubelet[2572]: E0127 13:02:22.155131 2572 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.396763 kubelet[2572]: E0127 13:02:22.396697 2572 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-4nwk8.gb1.brightbox.com\" not found" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.489404 kubelet[2572]: I0127 13:02:22.489357 2572 kubelet_node_status.go:78] "Successfully registered node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.548095 kubelet[2572]: I0127 13:02:22.548030 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.555343 kubelet[2572]: E0127 13:02:22.555292 2572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.555343 kubelet[2572]: I0127 13:02:22.555327 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.558412 kubelet[2572]: E0127 13:02:22.558383 2572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4nwk8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.558500 kubelet[2572]: I0127 13:02:22.558415 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:22.561307 kubelet[2572]: E0127 13:02:22.561266 2572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:23.026562 kubelet[2572]: I0127 13:02:23.026478 2572 apiserver.go:52] "Watching apiserver" Jan 27 13:02:23.048583 kubelet[2572]: I0127 13:02:23.048501 2572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 27 13:02:23.154543 kubelet[2572]: I0127 13:02:23.154429 2572 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:23.157817 kubelet[2572]: E0127 13:02:23.157760 2572 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4nwk8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:24.395088 systemd[1]: Reload requested from client PID 2840 ('systemctl') (unit session-12.scope)... Jan 27 13:02:24.395412 systemd[1]: Reloading... Jan 27 13:02:24.558555 zram_generator::config[2891]: No configuration found. Jan 27 13:02:24.942119 systemd[1]: Reloading finished in 545 ms. Jan 27 13:02:24.988036 kubelet[2572]: I0127 13:02:24.987721 2572 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 13:02:24.988237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 27 13:02:25.013434 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 27 13:02:25.013967 kernel: audit: type=1131 audit(1769518945.007:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:25.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:25.007397 systemd[1]: kubelet.service: Deactivated successfully. Jan 27 13:02:25.007913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:02:25.018053 systemd[1]: kubelet.service: Consumed 1.207s CPU time, 129.5M memory peak. Jan 27 13:02:25.029043 kernel: audit: type=1334 audit(1769518945.025:399): prog-id=116 op=LOAD Jan 27 13:02:25.029185 kernel: audit: type=1334 audit(1769518945.025:400): prog-id=78 op=UNLOAD Jan 27 13:02:25.025000 audit: BPF prog-id=116 op=LOAD Jan 27 13:02:25.025000 audit: BPF prog-id=78 op=UNLOAD Jan 27 13:02:25.024925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 27 13:02:25.033553 kernel: audit: type=1334 audit(1769518945.027:401): prog-id=117 op=LOAD Jan 27 13:02:25.027000 audit: BPF prog-id=117 op=LOAD Jan 27 13:02:25.027000 audit: BPF prog-id=83 op=UNLOAD Jan 27 13:02:25.035544 kernel: audit: type=1334 audit(1769518945.027:402): prog-id=83 op=UNLOAD Jan 27 13:02:25.029000 audit: BPF prog-id=118 op=LOAD Jan 27 13:02:25.038542 kernel: audit: type=1334 audit(1769518945.029:403): prog-id=118 op=LOAD Jan 27 13:02:25.030000 audit: BPF prog-id=72 op=UNLOAD Jan 27 13:02:25.044073 kernel: audit: type=1334 audit(1769518945.030:404): prog-id=72 op=UNLOAD Jan 27 13:02:25.044182 kernel: audit: type=1334 audit(1769518945.030:405): prog-id=119 op=LOAD Jan 27 13:02:25.044237 kernel: audit: type=1334 audit(1769518945.030:406): prog-id=120 op=LOAD Jan 27 13:02:25.030000 audit: BPF prog-id=119 op=LOAD Jan 27 13:02:25.030000 audit: BPF prog-id=120 op=LOAD Jan 27 13:02:25.030000 audit: BPF prog-id=73 op=UNLOAD Jan 27 13:02:25.030000 audit: BPF prog-id=74 op=UNLOAD Jan 27 13:02:25.031000 audit: BPF prog-id=121 op=LOAD Jan 27 13:02:25.031000 audit: BPF prog-id=68 op=UNLOAD Jan 27 13:02:25.032000 audit: BPF prog-id=122 op=LOAD Jan 27 13:02:25.032000 audit: BPF prog-id=79 op=UNLOAD Jan 27 13:02:25.033000 audit: BPF prog-id=123 op=LOAD Jan 27 13:02:25.033000 audit: BPF prog-id=124 op=LOAD Jan 27 13:02:25.033000 audit: BPF prog-id=84 op=UNLOAD Jan 27 13:02:25.033000 audit: BPF prog-id=85 op=UNLOAD Jan 27 13:02:25.047535 kernel: audit: type=1334 audit(1769518945.030:407): prog-id=73 op=UNLOAD Jan 27 13:02:25.034000 audit: BPF prog-id=125 op=LOAD Jan 27 13:02:25.034000 audit: BPF prog-id=69 op=UNLOAD Jan 27 13:02:25.034000 audit: BPF prog-id=126 op=LOAD Jan 27 13:02:25.034000 audit: BPF prog-id=127 op=LOAD Jan 27 13:02:25.034000 audit: BPF prog-id=70 op=UNLOAD Jan 27 13:02:25.035000 audit: BPF prog-id=71 op=UNLOAD Jan 27 13:02:25.037000 audit: BPF prog-id=128 op=LOAD Jan 27 13:02:25.037000 audit: BPF prog-id=80 op=UNLOAD Jan 27 13:02:25.037000 audit: BPF prog-id=129 op=LOAD Jan 27 13:02:25.037000 audit: BPF prog-id=130 op=LOAD Jan 27 13:02:25.037000 audit: BPF prog-id=81 op=UNLOAD Jan 27 13:02:25.037000 audit: BPF prog-id=82 op=UNLOAD Jan 27 13:02:25.039000 audit: BPF prog-id=131 op=LOAD Jan 27 13:02:25.039000 audit: BPF prog-id=75 op=UNLOAD Jan 27 13:02:25.039000 audit: BPF prog-id=132 op=LOAD Jan 27 13:02:25.039000 audit: BPF prog-id=133 op=LOAD Jan 27 13:02:25.039000 audit: BPF prog-id=76 op=UNLOAD Jan 27 13:02:25.039000 audit: BPF prog-id=77 op=UNLOAD Jan 27 13:02:25.041000 audit: BPF prog-id=134 op=LOAD Jan 27 13:02:25.041000 audit: BPF prog-id=65 op=UNLOAD Jan 27 13:02:25.041000 audit: BPF prog-id=135 op=LOAD Jan 27 13:02:25.041000 audit: BPF prog-id=136 op=LOAD Jan 27 13:02:25.041000 audit: BPF prog-id=66 op=UNLOAD Jan 27 13:02:25.041000 audit: BPF prog-id=67 op=UNLOAD Jan 27 13:02:25.390707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 27 13:02:25.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:25.404154 (kubelet)[2952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 27 13:02:25.524841 kubelet[2952]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:02:25.524841 kubelet[2952]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 27 13:02:25.524841 kubelet[2952]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 13:02:25.533085 kubelet[2952]: I0127 13:02:25.532811 2952 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 13:02:25.545589 kubelet[2952]: I0127 13:02:25.545477 2952 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 27 13:02:25.545589 kubelet[2952]: I0127 13:02:25.545552 2952 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 13:02:25.546986 kubelet[2952]: I0127 13:02:25.546938 2952 server.go:954] "Client rotation is on, will bootstrap in background" Jan 27 13:02:25.552594 kubelet[2952]: I0127 13:02:25.552555 2952 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 13:02:25.557794 kubelet[2952]: I0127 13:02:25.557669 2952 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 27 13:02:25.588846 kubelet[2952]: I0127 13:02:25.588766 2952 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 13:02:25.604507 kubelet[2952]: I0127 13:02:25.604434 2952 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 27 13:02:25.614491 kubelet[2952]: I0127 13:02:25.613841 2952 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 13:02:25.614491 kubelet[2952]: I0127 13:02:25.613909 2952 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-4nwk8.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 13:02:25.614491 kubelet[2952]: I0127 13:02:25.614277 2952 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 13:02:25.614491 kubelet[2952]: I0127 13:02:25.614295 2952 container_manager_linux.go:304] "Creating device plugin manager" Jan 27 13:02:25.614895 kubelet[2952]: I0127 13:02:25.614401 2952 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:02:25.619948 kubelet[2952]: I0127 13:02:25.619922 2952 kubelet.go:446] "Attempting to sync node with API server" Jan 27 13:02:25.620086 kubelet[2952]: I0127 13:02:25.620066 2952 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 13:02:25.620286 kubelet[2952]: I0127 13:02:25.620266 2952 kubelet.go:352] "Adding apiserver pod source" Jan 27 13:02:25.621261 kubelet[2952]: I0127 13:02:25.621238 2952 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 13:02:25.630883 kubelet[2952]: I0127 13:02:25.630070 2952 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 27 13:02:25.632142 kubelet[2952]: I0127 13:02:25.632117 2952 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 13:02:25.650280 kubelet[2952]: I0127 13:02:25.650115 2952 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 27 13:02:25.650280 kubelet[2952]: I0127 13:02:25.650206 2952 server.go:1287] "Started kubelet" Jan 27 13:02:25.653820 kubelet[2952]: I0127 13:02:25.653758 2952 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 13:02:25.655847 kubelet[2952]: I0127 13:02:25.655806 2952 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 13:02:25.661489 kubelet[2952]: I0127 13:02:25.661461 2952 server.go:479] "Adding debug handlers to kubelet server" Jan 27 13:02:25.665456 kubelet[2952]: I0127 13:02:25.665368 2952 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 13:02:25.670068 kubelet[2952]: I0127 13:02:25.669625 2952 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 13:02:25.671603 kubelet[2952]: I0127 13:02:25.670427 2952 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 27 13:02:25.672901 kubelet[2952]: I0127 13:02:25.672868 2952 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 27 13:02:25.673426 kubelet[2952]: I0127 13:02:25.673403 2952 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 27 13:02:25.677743 kubelet[2952]: I0127 13:02:25.677717 2952 reconciler.go:26] "Reconciler: start to sync state" Jan 27 13:02:25.682000 kubelet[2952]: I0127 13:02:25.681960 2952 factory.go:221] Registration of the systemd container factory successfully Jan 27 13:02:25.682782 kubelet[2952]: I0127 13:02:25.682134 2952 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 27 13:02:25.691547 kubelet[2952]: I0127 13:02:25.691394 2952 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 13:02:25.693159 kubelet[2952]: I0127 13:02:25.693135 2952 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 13:02:25.693319 kubelet[2952]: I0127 13:02:25.693299 2952 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 27 13:02:25.693459 kubelet[2952]: I0127 13:02:25.693434 2952 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 27 13:02:25.695541 kubelet[2952]: I0127 13:02:25.693586 2952 kubelet.go:2382] "Starting kubelet main sync loop" Jan 27 13:02:25.695541 kubelet[2952]: E0127 13:02:25.693696 2952 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 13:02:25.705962 kubelet[2952]: I0127 13:02:25.705924 2952 factory.go:221] Registration of the containerd container factory successfully Jan 27 13:02:25.708589 kubelet[2952]: E0127 13:02:25.707952 2952 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 27 13:02:25.794051 kubelet[2952]: E0127 13:02:25.793989 2952 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 27 13:02:25.813332 kubelet[2952]: I0127 13:02:25.813261 2952 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 27 13:02:25.813773 kubelet[2952]: I0127 13:02:25.813621 2952 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 27 13:02:25.813773 kubelet[2952]: I0127 13:02:25.813711 2952 state_mem.go:36] "Initialized new in-memory state store" Jan 27 13:02:25.815531 kubelet[2952]: I0127 13:02:25.814502 2952 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 27 13:02:25.815531 kubelet[2952]: I0127 13:02:25.814548 2952 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 27 13:02:25.815531 kubelet[2952]: I0127 13:02:25.814596 2952 policy_none.go:49] "None policy: Start" Jan 27 13:02:25.815531 kubelet[2952]: I0127 13:02:25.814614 2952 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 27 13:02:25.815531 kubelet[2952]: I0127 13:02:25.814640 2952 state_mem.go:35] "Initializing new in-memory state store" Jan 27 13:02:25.815531 kubelet[2952]: I0127 13:02:25.814878 2952 state_mem.go:75] "Updated machine memory state" Jan 27 13:02:25.827095 kubelet[2952]: I0127 13:02:25.827043 2952 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 13:02:25.827759 kubelet[2952]: I0127 13:02:25.827738 2952 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 13:02:25.827949 kubelet[2952]: I0127 13:02:25.827886 2952 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 13:02:25.829100 kubelet[2952]: I0127 13:02:25.829079 2952 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 13:02:25.834327 kubelet[2952]: E0127 13:02:25.834284 2952 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 27 13:02:25.959049 kubelet[2952]: I0127 13:02:25.958433 2952 kubelet_node_status.go:75] "Attempting to register node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:25.970804 kubelet[2952]: I0127 13:02:25.970424 2952 kubelet_node_status.go:124] "Node was previously registered" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:25.970804 kubelet[2952]: I0127 13:02:25.970598 2952 kubelet_node_status.go:78] "Successfully registered node" node="srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:25.997979 kubelet[2952]: I0127 13:02:25.997492 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.001478 kubelet[2952]: I0127 13:02:26.000919 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.001714 kubelet[2952]: I0127 13:02:26.001642 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.011751 kubelet[2952]: W0127 13:02:26.011693 2952 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 27 13:02:26.017319 kubelet[2952]: W0127 13:02:26.016610 2952 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 27 13:02:26.018577 kubelet[2952]: W0127 13:02:26.018495 2952 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 27 13:02:26.080161 kubelet[2952]: I0127 13:02:26.080076 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-ca-certs\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080403 kubelet[2952]: I0127 13:02:26.080189 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-flexvolume-dir\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080403 kubelet[2952]: I0127 13:02:26.080249 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-k8s-certs\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080403 kubelet[2952]: I0127 13:02:26.080290 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce8034d5daeb76672de9ca75d73cb922-ca-certs\") pod \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" (UID: \"ce8034d5daeb76672de9ca75d73cb922\") " pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080403 kubelet[2952]: I0127 13:02:26.080327 2952 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce8034d5daeb76672de9ca75d73cb922-k8s-certs\") pod \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" (UID: \"ce8034d5daeb76672de9ca75d73cb922\") " pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080403 kubelet[2952]: I0127 13:02:26.080357 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce8034d5daeb76672de9ca75d73cb922-usr-share-ca-certificates\") pod \"kube-apiserver-srv-4nwk8.gb1.brightbox.com\" (UID: \"ce8034d5daeb76672de9ca75d73cb922\") " pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080704 kubelet[2952]: I0127 13:02:26.080398 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-kubeconfig\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080704 kubelet[2952]: I0127 13:02:26.080432 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eeaade419e62e72979968504232eb47e-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" (UID: \"eeaade419e62e72979968504232eb47e\") " pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.080704 kubelet[2952]: I0127 13:02:26.080488 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/801d25e5b3fa444b0556b5e2e23f9f46-kubeconfig\") pod \"kube-scheduler-srv-4nwk8.gb1.brightbox.com\" (UID: \"801d25e5b3fa444b0556b5e2e23f9f46\") " pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.624503 kubelet[2952]: I0127 13:02:26.624357 2952 apiserver.go:52] "Watching apiserver" Jan 27 13:02:26.673872 kubelet[2952]: I0127 13:02:26.673797 2952 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 27 13:02:26.767058 kubelet[2952]: I0127 13:02:26.767012 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.767761 kubelet[2952]: I0127 13:02:26.767730 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.777783 kubelet[2952]: W0127 13:02:26.777731 2952 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 27 13:02:26.778055 kubelet[2952]: E0127 13:02:26.777821 2952 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-4nwk8.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.785819 kubelet[2952]: W0127 13:02:26.784443 2952 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 27 13:02:26.785819 kubelet[2952]: E0127 13:02:26.784546 2952 kubelet.go:3196] "Failed creating 
a mirror pod" err="pods \"kube-controller-manager-srv-4nwk8.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" Jan 27 13:02:26.824451 kubelet[2952]: I0127 13:02:26.824089 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-4nwk8.gb1.brightbox.com" podStartSLOduration=0.823499645 podStartE2EDuration="823.499645ms" podCreationTimestamp="2026-01-27 13:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:02:26.806860608 +0000 UTC m=+1.381202801" watchObservedRunningTime="2026-01-27 13:02:26.823499645 +0000 UTC m=+1.397841812" Jan 27 13:02:26.841080 kubelet[2952]: I0127 13:02:26.840974 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-4nwk8.gb1.brightbox.com" podStartSLOduration=0.840941915 podStartE2EDuration="840.941915ms" podCreationTimestamp="2026-01-27 13:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:02:26.825412723 +0000 UTC m=+1.399754934" watchObservedRunningTime="2026-01-27 13:02:26.840941915 +0000 UTC m=+1.415284091" Jan 27 13:02:26.858638 kubelet[2952]: I0127 13:02:26.858537 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-4nwk8.gb1.brightbox.com" podStartSLOduration=0.858494942 podStartE2EDuration="858.494942ms" podCreationTimestamp="2026-01-27 13:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:02:26.841610518 +0000 UTC m=+1.415952705" watchObservedRunningTime="2026-01-27 13:02:26.858494942 +0000 UTC m=+1.432837118" Jan 27 13:02:30.868830 kubelet[2952]: I0127 13:02:30.868758 2952 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 27 13:02:30.869870 containerd[1636]: time="2026-01-27T13:02:30.869255192Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 27 13:02:30.870757 kubelet[2952]: I0127 13:02:30.869497 2952 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 27 13:02:31.743612 systemd[1]: Created slice kubepods-besteffort-pod0795cff2_d20e_4b74_9c51_8469487a693e.slice - libcontainer container kubepods-besteffort-pod0795cff2_d20e_4b74_9c51_8469487a693e.slice. 
Jan 27 13:02:31.817410 kubelet[2952]: I0127 13:02:31.817185 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0795cff2-d20e-4b74-9c51-8469487a693e-xtables-lock\") pod \"kube-proxy-spwfs\" (UID: \"0795cff2-d20e-4b74-9c51-8469487a693e\") " pod="kube-system/kube-proxy-spwfs" Jan 27 13:02:31.817410 kubelet[2952]: I0127 13:02:31.817269 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd98b\" (UniqueName: \"kubernetes.io/projected/0795cff2-d20e-4b74-9c51-8469487a693e-kube-api-access-rd98b\") pod \"kube-proxy-spwfs\" (UID: \"0795cff2-d20e-4b74-9c51-8469487a693e\") " pod="kube-system/kube-proxy-spwfs" Jan 27 13:02:31.817410 kubelet[2952]: I0127 13:02:31.817310 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0795cff2-d20e-4b74-9c51-8469487a693e-lib-modules\") pod \"kube-proxy-spwfs\" (UID: \"0795cff2-d20e-4b74-9c51-8469487a693e\") " pod="kube-system/kube-proxy-spwfs" Jan 27 13:02:31.817410 kubelet[2952]: I0127 13:02:31.817339 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0795cff2-d20e-4b74-9c51-8469487a693e-kube-proxy\") pod \"kube-proxy-spwfs\" (UID: \"0795cff2-d20e-4b74-9c51-8469487a693e\") " pod="kube-system/kube-proxy-spwfs" Jan 27 13:02:31.936633 systemd[1]: Created slice kubepods-besteffort-pod35ab507e_1536_43f3_b147_bec3865fe071.slice - libcontainer container kubepods-besteffort-pod35ab507e_1536_43f3_b147_bec3865fe071.slice. Jan 27 13:02:32.019415 kubelet[2952]: I0127 13:02:32.018611 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35ab507e-1536-43f3-b147-bec3865fe071-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kgwrw\" (UID: \"35ab507e-1536-43f3-b147-bec3865fe071\") " pod="tigera-operator/tigera-operator-7dcd859c48-kgwrw" Jan 27 13:02:32.019415 kubelet[2952]: I0127 13:02:32.018689 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrszp\" (UniqueName: \"kubernetes.io/projected/35ab507e-1536-43f3-b147-bec3865fe071-kube-api-access-mrszp\") pod \"tigera-operator-7dcd859c48-kgwrw\" (UID: \"35ab507e-1536-43f3-b147-bec3865fe071\") " pod="tigera-operator/tigera-operator-7dcd859c48-kgwrw" Jan 27 13:02:32.056939 containerd[1636]: time="2026-01-27T13:02:32.056876061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spwfs,Uid:0795cff2-d20e-4b74-9c51-8469487a693e,Namespace:kube-system,Attempt:0,}" Jan 27 13:02:32.087059 containerd[1636]: time="2026-01-27T13:02:32.086912106Z" level=info msg="connecting to shim e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09" address="unix:///run/containerd/s/df65e1b47842b8fc2063a6cd1d16872091f6e11f2b6527ae5ecf0d04848c2372" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:32.130014 systemd[1]: Started cri-containerd-e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09.scope - libcontainer container e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09. 
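The kubepods slice names created by systemd above follow a pattern visible in the log itself: kubepods-<qos>-pod<pod UID with dashes replaced by underscores>.slice. A small Python sketch reproducing the name for the kube-proxy-spwfs pod (pod_slice_name is an illustrative helper, not a kubelet API):

def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # Pattern observed in the "Created slice kubepods-besteffort-pod..." messages above.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("0795cff2-d20e-4b74-9c51-8469487a693e"))
# kubepods-besteffort-pod0795cff2_d20e_4b74_9c51_8469487a693e.slice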
Jan 27 13:02:32.164000 audit: BPF prog-id=137 op=LOAD Jan 27 13:02:32.173961 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 27 13:02:32.174175 kernel: audit: type=1334 audit(1769518952.164:442): prog-id=137 op=LOAD Jan 27 13:02:32.176000 audit: BPF prog-id=138 op=LOAD Jan 27 13:02:32.176000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.181300 kernel: audit: type=1334 audit(1769518952.176:443): prog-id=138 op=LOAD Jan 27 13:02:32.181366 kernel: audit: type=1300 audit(1769518952.176:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.186182 kernel: audit: type=1327 audit(1769518952.176:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.189867 kernel: audit: type=1334 audit(1769518952.176:444): prog-id=138 op=UNLOAD Jan 27 13:02:32.176000 audit: BPF prog-id=138 op=UNLOAD Jan 27 13:02:32.176000 audit[3019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.192652 kernel: audit: type=1300 audit(1769518952.176:444): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.197755 kernel: audit: type=1327 audit(1769518952.176:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.180000 audit: BPF prog-id=139 op=LOAD Jan 27 13:02:32.202347 kernel: audit: type=1334 audit(1769518952.180:445): prog-id=139 op=LOAD Jan 27 13:02:32.180000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.211290 kernel: audit: type=1300 audit(1769518952.180:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.211453 kernel: audit: type=1327 audit(1769518952.180:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.180000 audit: BPF prog-id=140 op=LOAD Jan 27 13:02:32.180000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.180000 audit: BPF prog-id=140 op=UNLOAD Jan 27 13:02:32.180000 audit[3019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.180000 audit: BPF prog-id=139 op=UNLOAD Jan 27 13:02:32.180000 audit[3019]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.180000 audit: BPF prog-id=141 op=LOAD Jan 27 13:02:32.180000 audit[3019]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3007 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.180000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663238643966323135386565613565336632343462383162363735 Jan 27 13:02:32.241185 containerd[1636]: time="2026-01-27T13:02:32.240878720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spwfs,Uid:0795cff2-d20e-4b74-9c51-8469487a693e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09\"" Jan 27 13:02:32.245086 containerd[1636]: time="2026-01-27T13:02:32.245048722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kgwrw,Uid:35ab507e-1536-43f3-b147-bec3865fe071,Namespace:tigera-operator,Attempt:0,}" Jan 27 13:02:32.255318 containerd[1636]: time="2026-01-27T13:02:32.255239631Z" level=info msg="CreateContainer within sandbox \"e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 27 13:02:32.286267 containerd[1636]: time="2026-01-27T13:02:32.286122197Z" level=info msg="connecting to shim b5b4f66a96da7bb48d4d20dcf2e0973c5d9586caf13f52d9354786cb282470c8" address="unix:///run/containerd/s/4d9c9064b805a2026d14f069af705e28391c889af33d3a3abc0380fabbad7318" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:32.297782 containerd[1636]: time="2026-01-27T13:02:32.297584732Z" level=info msg="Container 6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:32.310155 containerd[1636]: time="2026-01-27T13:02:32.310053980Z" level=info msg="CreateContainer within sandbox \"e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b\"" Jan 27 13:02:32.311620 containerd[1636]: time="2026-01-27T13:02:32.311542456Z" level=info msg="StartContainer for \"6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b\"" Jan 27 13:02:32.318790 containerd[1636]: time="2026-01-27T13:02:32.318732120Z" level=info msg="connecting to shim 6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b" address="unix:///run/containerd/s/df65e1b47842b8fc2063a6cd1d16872091f6e11f2b6527ae5ecf0d04848c2372" protocol=ttrpc version=3 Jan 27 13:02:32.351801 systemd[1]: Started cri-containerd-b5b4f66a96da7bb48d4d20dcf2e0973c5d9586caf13f52d9354786cb282470c8.scope - libcontainer container b5b4f66a96da7bb48d4d20dcf2e0973c5d9586caf13f52d9354786cb282470c8. Jan 27 13:02:32.370792 systemd[1]: Started cri-containerd-6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b.scope - libcontainer container 6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b. 
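The runc command lines recorded in the audit PROCTITLE fields appear cut off before the end of the task ID, so the decoded ID is only a prefix of the full sandbox or container ID that containerd reports above. A short sketch (an assumed correlation step, not from the log) matching such a prefix against the full IDs:

# Full IDs copied from the containerd/systemd messages above; the prefix is the
# tail of a decoded runc --log path from the nearby audit PROCTITLE records.
known_ids = [
    "e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09",  # kube-proxy-spwfs sandbox
    "6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b",  # kube-proxy container
]
truncated = "e4f28d9f2158eea5e3f244b81b675"  # decoded from the PROCTITLE hex above

print([cid for cid in known_ids if cid.startswith(truncated)])
# ['e4f28d9f2158eea5e3f244b81b675a680fc2a892de9ef1678d8d9fd7f3f9ed09']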
Jan 27 13:02:32.382000 audit: BPF prog-id=142 op=LOAD Jan 27 13:02:32.384000 audit: BPF prog-id=143 op=LOAD Jan 27 13:02:32.384000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.384000 audit: BPF prog-id=143 op=UNLOAD Jan 27 13:02:32.384000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.385000 audit: BPF prog-id=144 op=LOAD Jan 27 13:02:32.385000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.385000 audit: BPF prog-id=145 op=LOAD Jan 27 13:02:32.385000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.385000 audit: BPF prog-id=145 op=UNLOAD Jan 27 13:02:32.385000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.385000 audit: BPF prog-id=144 op=UNLOAD Jan 27 13:02:32.385000 audit[3064]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.385000 audit: BPF prog-id=146 op=LOAD Jan 27 13:02:32.385000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3054 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235623466363661393664613762623438643464323064636632653039 Jan 27 13:02:32.470000 audit: BPF prog-id=147 op=LOAD Jan 27 13:02:32.470000 audit[3069]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3007 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623463366263366534666334383965663130386431326334306132 Jan 27 13:02:32.470000 audit: BPF prog-id=148 op=LOAD Jan 27 13:02:32.470000 audit[3069]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3007 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623463366263366534666334383965663130386431326334306132 Jan 27 13:02:32.470000 audit: BPF prog-id=148 op=UNLOAD Jan 27 13:02:32.470000 audit[3069]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623463366263366534666334383965663130386431326334306132 Jan 27 13:02:32.471000 audit: BPF prog-id=147 op=UNLOAD Jan 27 13:02:32.471000 audit[3069]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623463366263366534666334383965663130386431326334306132 Jan 27 13:02:32.471000 audit: BPF prog-id=149 op=LOAD Jan 27 13:02:32.471000 audit[3069]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3007 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:32.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662623463366263366534666334383965663130386431326334306132 Jan 27 13:02:32.489975 containerd[1636]: time="2026-01-27T13:02:32.489795681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kgwrw,Uid:35ab507e-1536-43f3-b147-bec3865fe071,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b5b4f66a96da7bb48d4d20dcf2e0973c5d9586caf13f52d9354786cb282470c8\"" Jan 27 13:02:32.498718 containerd[1636]: time="2026-01-27T13:02:32.497219646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 27 13:02:32.527866 containerd[1636]: time="2026-01-27T13:02:32.527812389Z" level=info msg="StartContainer for \"6bb4c6bc6e4fc489ef108d12c40a25d945306062e77cc4703501fb0f6e081f7b\" returns successfully" Jan 27 13:02:32.805713 kubelet[2952]: I0127 13:02:32.805054 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-spwfs" podStartSLOduration=1.805032052 podStartE2EDuration="1.805032052s" podCreationTimestamp="2026-01-27 13:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:02:32.804338481 +0000 UTC m=+7.378680663" watchObservedRunningTime="2026-01-27 13:02:32.805032052 +0000 UTC m=+7.379374230" Jan 27 13:02:33.038000 audit[3153]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.038000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde3bd8080 a2=0 a3=7ffde3bd806c items=0 ppid=3095 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 13:02:33.048000 audit[3156]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.048000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc8b103f0 a2=0 a3=7ffcc8b103dc items=0 ppid=3095 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
13:02:33.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 13:02:33.053000 audit[3155]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.053000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe25bbb410 a2=0 a3=f1333751fea1da33 items=0 ppid=3095 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.053000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 27 13:02:33.059000 audit[3157]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.059000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3d612680 a2=0 a3=7ffc3d61266c items=0 ppid=3095 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 13:02:33.061000 audit[3158]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.061000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7221b190 a2=0 a3=7ffd7221b17c items=0 ppid=3095 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 27 13:02:33.063000 audit[3159]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.063000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8adcb370 a2=0 a3=7fff8adcb35c items=0 ppid=3095 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 27 13:02:33.152000 audit[3160]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.152000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd10d8b230 a2=0 a3=7ffd10d8b21c items=0 ppid=3095 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.152000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 13:02:33.158000 audit[3162]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.158000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd6c8a0580 a2=0 a3=7ffd6c8a056c items=0 ppid=3095 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 27 13:02:33.164000 audit[3165]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.164000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe9fb76d50 a2=0 a3=7ffe9fb76d3c items=0 ppid=3095 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 27 13:02:33.166000 audit[3166]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.166000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedad06d30 a2=0 a3=7ffedad06d1c items=0 ppid=3095 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 13:02:33.170000 audit[3168]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.170000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8cde0210 a2=0 a3=7fff8cde01fc items=0 ppid=3095 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 13:02:33.172000 audit[3169]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.172000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8f698420 a2=0 
a3=7ffc8f69840c items=0 ppid=3095 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 13:02:33.177000 audit[3171]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.177000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc40c9c0b0 a2=0 a3=7ffc40c9c09c items=0 ppid=3095 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 27 13:02:33.183000 audit[3174]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.183000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf9f4a780 a2=0 a3=7ffdf9f4a76c items=0 ppid=3095 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 27 13:02:33.185000 audit[3175]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.185000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd848f4890 a2=0 a3=7ffd848f487c items=0 ppid=3095 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 13:02:33.189000 audit[3177]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.189000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff054e1ce0 a2=0 a3=7fff054e1ccc items=0 ppid=3095 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 13:02:33.192000 audit[3178]: NETFILTER_CFG 
table=filter:70 family=2 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.192000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc33efab80 a2=0 a3=7ffc33efab6c items=0 ppid=3095 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 13:02:33.196000 audit[3180]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.196000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff42fdb440 a2=0 a3=7fff42fdb42c items=0 ppid=3095 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 27 13:02:33.202000 audit[3183]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.202000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd99e21170 a2=0 a3=7ffd99e2115c items=0 ppid=3095 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.202000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 27 13:02:33.208000 audit[3186]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.208000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc80ab43f0 a2=0 a3=7ffc80ab43dc items=0 ppid=3095 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 27 13:02:33.211000 audit[3187]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.211000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda1cd41a0 a2=0 a3=7ffda1cd418c items=0 ppid=3095 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 13:02:33.216000 audit[3189]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.216000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffda20605f0 a2=0 a3=7ffda20605dc items=0 ppid=3095 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 13:02:33.222000 audit[3192]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.222000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3ead5d10 a2=0 a3=7ffe3ead5cfc items=0 ppid=3095 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 13:02:33.224000 audit[3193]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.224000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe29fb3650 a2=0 a3=7ffe29fb363c items=0 ppid=3095 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 13:02:33.230000 audit[3195]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 27 13:02:33.230000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffee65d6600 a2=0 a3=7ffee65d65ec items=0 ppid=3095 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 13:02:33.263000 audit[3201]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:33.263000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5d105d20 a2=0 
a3=7fff5d105d0c items=0 ppid=3095 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:33.275000 audit[3201]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:33.275000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff5d105d20 a2=0 a3=7fff5d105d0c items=0 ppid=3095 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.275000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:33.279000 audit[3206]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.279000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe54cc4bf0 a2=0 a3=7ffe54cc4bdc items=0 ppid=3095 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 27 13:02:33.287000 audit[3208]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.287000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8457c380 a2=0 a3=7fff8457c36c items=0 ppid=3095 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 27 13:02:33.299000 audit[3211]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.299000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd0d296c00 a2=0 a3=7ffd0d296bec items=0 ppid=3095 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 27 13:02:33.301000 audit[3212]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain 
pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.301000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe4d35c40 a2=0 a3=7fffe4d35c2c items=0 ppid=3095 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 27 13:02:33.305000 audit[3214]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.305000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc588003e0 a2=0 a3=7ffc588003cc items=0 ppid=3095 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 27 13:02:33.308000 audit[3215]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.308000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4ee05fa0 a2=0 a3=7ffc4ee05f8c items=0 ppid=3095 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 27 13:02:33.314000 audit[3217]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.314000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbdb69e60 a2=0 a3=7ffcbdb69e4c items=0 ppid=3095 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 27 13:02:33.322000 audit[3220]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.322000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcbd323090 a2=0 a3=7ffcbd32307c items=0 ppid=3095 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.322000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 27 13:02:33.325000 audit[3221]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.325000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0e3d8be0 a2=0 a3=7ffd0e3d8bcc items=0 ppid=3095 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 27 13:02:33.329000 audit[3223]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.329000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffed73dcd40 a2=0 a3=7ffed73dcd2c items=0 ppid=3095 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 27 13:02:33.331000 audit[3224]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.331000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd03c4a4f0 a2=0 a3=7ffd03c4a4dc items=0 ppid=3095 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 27 13:02:33.336000 audit[3226]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.336000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7ca79370 a2=0 a3=7ffe7ca7935c items=0 ppid=3095 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 27 13:02:33.342000 audit[3229]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.342000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc22bb7370 a2=0 a3=7ffc22bb735c 
items=0 ppid=3095 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 27 13:02:33.348000 audit[3232]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.348000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2e9cffb0 a2=0 a3=7ffc2e9cff9c items=0 ppid=3095 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 27 13:02:33.350000 audit[3233]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.350000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdcfdb8ec0 a2=0 a3=7ffdcfdb8eac items=0 ppid=3095 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 27 13:02:33.355000 audit[3235]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.355000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe9d57f8d0 a2=0 a3=7ffe9d57f8bc items=0 ppid=3095 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 13:02:33.363000 audit[3238]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.363000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe0d434130 a2=0 a3=7ffe0d43411c items=0 ppid=3095 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.363000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 27 13:02:33.365000 audit[3239]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.365000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc33161a60 a2=0 a3=7ffc33161a4c items=0 ppid=3095 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 27 13:02:33.369000 audit[3241]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.369000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe5309b8e0 a2=0 a3=7ffe5309b8cc items=0 ppid=3095 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 27 13:02:33.370000 audit[3242]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.370000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce11bc4d0 a2=0 a3=7ffce11bc4bc items=0 ppid=3095 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 27 13:02:33.374000 audit[3244]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.374000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc025a8450 a2=0 a3=7ffc025a843c items=0 ppid=3095 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 13:02:33.380000 audit[3247]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 27 13:02:33.380000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe3ede04c0 a2=0 a3=7ffe3ede04ac items=0 ppid=3095 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 27 13:02:33.385000 audit[3249]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 13:02:33.385000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc5db9f2a0 a2=0 a3=7ffc5db9f28c items=0 ppid=3095 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.385000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:33.386000 audit[3249]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 27 13:02:33.386000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc5db9f2a0 a2=0 a3=7ffc5db9f28c items=0 ppid=3095 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:33.386000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:34.477202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714475988.mount: Deactivated successfully. Jan 27 13:02:36.019168 containerd[1636]: time="2026-01-27T13:02:36.019095716Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:36.022271 containerd[1636]: time="2026-01-27T13:02:36.022224514Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 27 13:02:36.023635 containerd[1636]: time="2026-01-27T13:02:36.023561080Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:36.043892 containerd[1636]: time="2026-01-27T13:02:36.043777371Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:36.046837 containerd[1636]: time="2026-01-27T13:02:36.046763632Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.549472719s" Jan 27 13:02:36.047286 containerd[1636]: time="2026-01-27T13:02:36.047074979Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 27 13:02:36.051617 containerd[1636]: time="2026-01-27T13:02:36.051573460Z" level=info msg="CreateContainer within sandbox \"b5b4f66a96da7bb48d4d20dcf2e0973c5d9586caf13f52d9354786cb282470c8\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 27 13:02:36.082339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2678319123.mount: Deactivated successfully. Jan 27 13:02:36.124039 containerd[1636]: time="2026-01-27T13:02:36.123968081Z" level=info msg="Container 7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:36.134632 containerd[1636]: time="2026-01-27T13:02:36.134580891Z" level=info msg="CreateContainer within sandbox \"b5b4f66a96da7bb48d4d20dcf2e0973c5d9586caf13f52d9354786cb282470c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8\"" Jan 27 13:02:36.136964 containerd[1636]: time="2026-01-27T13:02:36.136922221Z" level=info msg="StartContainer for \"7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8\"" Jan 27 13:02:36.139154 containerd[1636]: time="2026-01-27T13:02:36.139035622Z" level=info msg="connecting to shim 7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8" address="unix:///run/containerd/s/4d9c9064b805a2026d14f069af705e28391c889af33d3a3abc0380fabbad7318" protocol=ttrpc version=3 Jan 27 13:02:36.177032 systemd[1]: Started cri-containerd-7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8.scope - libcontainer container 7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8. Jan 27 13:02:36.205000 audit: BPF prog-id=150 op=LOAD Jan 27 13:02:36.206000 audit: BPF prog-id=151 op=LOAD Jan 27 13:02:36.206000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 13:02:36.206000 audit: BPF prog-id=151 op=UNLOAD Jan 27 13:02:36.206000 audit[3260]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 13:02:36.206000 audit: BPF prog-id=152 op=LOAD Jan 27 13:02:36.206000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 
13:02:36.206000 audit: BPF prog-id=153 op=LOAD Jan 27 13:02:36.206000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 13:02:36.206000 audit: BPF prog-id=153 op=UNLOAD Jan 27 13:02:36.206000 audit[3260]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 13:02:36.206000 audit: BPF prog-id=152 op=UNLOAD Jan 27 13:02:36.206000 audit[3260]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 13:02:36.207000 audit: BPF prog-id=154 op=LOAD Jan 27 13:02:36.207000 audit[3260]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3054 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:36.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323665653762636134613933316364643666613332353236663163 Jan 27 13:02:36.248448 containerd[1636]: time="2026-01-27T13:02:36.248242733Z" level=info msg="StartContainer for \"7426ee7bca4a931cdd6fa32526f1c1854395c17bac77cf9a0ac0dc59fff8e1c8\" returns successfully" Jan 27 13:02:36.817761 kubelet[2952]: I0127 13:02:36.817522 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kgwrw" podStartSLOduration=2.264968704 podStartE2EDuration="5.817505899s" podCreationTimestamp="2026-01-27 13:02:31 +0000 UTC" firstStartedPulling="2026-01-27 13:02:32.496160562 +0000 UTC m=+7.070502723" lastFinishedPulling="2026-01-27 13:02:36.048697755 +0000 UTC m=+10.623039918" observedRunningTime="2026-01-27 13:02:36.816389349 +0000 UTC m=+11.390731536" watchObservedRunningTime="2026-01-27 13:02:36.817505899 +0000 UTC m=+11.391848080" Jan 27 
13:02:43.692688 sudo[1948]: pam_unix(sudo:session): session closed for user root Jan 27 13:02:43.704634 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 27 13:02:43.705103 kernel: audit: type=1106 audit(1769518963.692:522): pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:02:43.692000 audit[1948]: USER_END pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:02:43.692000 audit[1948]: CRED_DISP pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:02:43.717553 kernel: audit: type=1104 audit(1769518963.692:523): pid=1948 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 27 13:02:43.801452 sshd[1947]: Connection closed by 68.220.241.50 port 40460 Jan 27 13:02:43.801006 sshd-session[1943]: pam_unix(sshd:session): session closed for user core Jan 27 13:02:43.818598 kernel: audit: type=1106 audit(1769518963.808:524): pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:02:43.808000 audit[1943]: USER_END pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:02:43.827040 systemd[1]: sshd@8-10.230.66.190:22-68.220.241.50:40460.service: Deactivated successfully. Jan 27 13:02:43.842805 kernel: audit: type=1104 audit(1769518963.815:525): pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:02:43.815000 audit[1943]: CRED_DISP pid=1943 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:02:43.847051 systemd[1]: session-12.scope: Deactivated successfully. Jan 27 13:02:43.847729 systemd[1]: session-12.scope: Consumed 6.704s CPU time, 153.4M memory peak. Jan 27 13:02:43.855124 systemd-logind[1615]: Session 12 logged out. Waiting for processes to exit. Jan 27 13:02:43.860663 systemd-logind[1615]: Removed session 12. Jan 27 13:02:43.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.66.190:22-68.220.241.50:40460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:02:43.867760 kernel: audit: type=1131 audit(1769518963.828:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.66.190:22-68.220.241.50:40460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:02:45.045000 audit[3337]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:45.056879 kernel: audit: type=1325 audit(1769518965.045:527): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:45.045000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd261f3ad0 a2=0 a3=7ffd261f3abc items=0 ppid=3095 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:45.070976 kernel: audit: type=1300 audit(1769518965.045:527): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd261f3ad0 a2=0 a3=7ffd261f3abc items=0 ppid=3095 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:45.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:45.076735 kernel: audit: type=1327 audit(1769518965.045:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:45.076896 kernel: audit: type=1325 audit(1769518965.064:528): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:45.064000 audit[3337]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:45.082680 kernel: audit: type=1300 audit(1769518965.064:528): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd261f3ad0 a2=0 a3=0 items=0 ppid=3095 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:45.064000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd261f3ad0 a2=0 a3=0 items=0 ppid=3095 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:45.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:45.124000 audit[3339]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:45.124000 audit[3339]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8c1695b0 a2=0 a3=7ffe8c16959c items=0 ppid=3095 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:45.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:45.128000 audit[3339]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:45.128000 audit[3339]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8c1695b0 a2=0 a3=0 items=0 ppid=3095 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:45.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:48.549000 audit[3342]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:48.549000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff5dc80660 a2=0 a3=7fff5dc8064c items=0 ppid=3095 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:48.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:48.556000 audit[3342]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:48.556000 audit[3342]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff5dc80660 a2=0 a3=0 items=0 ppid=3095 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:48.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:48.686000 audit[3344]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:48.686000 audit[3344]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcf5d7a9f0 a2=0 a3=7ffcf5d7a9dc items=0 ppid=3095 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:48.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:48.691000 audit[3344]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:48.691000 audit[3344]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf5d7a9f0 a2=0 a3=0 items=0 ppid=3095 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:48.691000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:49.716735 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 27 13:02:49.717025 kernel: audit: type=1325 audit(1769518969.705:535): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:49.705000 audit[3350]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:49.705000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff29109cf0 a2=0 a3=7fff29109cdc items=0 ppid=3095 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:49.725545 kernel: audit: type=1300 audit(1769518969.705:535): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff29109cf0 a2=0 a3=7fff29109cdc items=0 ppid=3095 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:49.735548 kernel: audit: type=1327 audit(1769518969.705:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:49.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:49.744000 audit[3350]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:49.753705 kernel: audit: type=1325 audit(1769518969.744:536): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:49.753778 kernel: audit: type=1300 audit(1769518969.744:536): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff29109cf0 a2=0 a3=0 items=0 ppid=3095 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:49.744000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff29109cf0 a2=0 a3=0 items=0 ppid=3095 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:49.756951 kernel: audit: type=1327 audit(1769518969.744:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:49.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:50.734556 kernel: audit: type=1325 audit(1769518970.722:537): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:50.722000 audit[3352]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:50.722000 audit[3352]: SYSCALL arch=c000003e syscall=46 
success=yes exit=8224 a0=3 a1=7ffd3adf0c50 a2=0 a3=7ffd3adf0c3c items=0 ppid=3095 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:50.748266 kernel: audit: type=1300 audit(1769518970.722:537): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd3adf0c50 a2=0 a3=7ffd3adf0c3c items=0 ppid=3095 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:50.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:50.753540 kernel: audit: type=1327 audit(1769518970.722:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:50.754000 audit[3352]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:50.758598 kernel: audit: type=1325 audit(1769518970.754:538): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:50.754000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3adf0c50 a2=0 a3=0 items=0 ppid=3095 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:50.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:50.804024 systemd[1]: Created slice kubepods-besteffort-podf698f771_dc5d_4e33_803b_e88c6ed75e62.slice - libcontainer container kubepods-besteffort-podf698f771_dc5d_4e33_803b_e88c6ed75e62.slice. 
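Note on the audit records above: the PROCTITLE field is a hex encoding of the process command line, with NUL bytes separating the arguments (auditd escapes any proctitle that contains spaces or other non-printable bytes). As a rough illustration only, a minimal Python sketch that decodes the value repeated in the iptables-restore records; the hex string is copied verbatim from the log entries above:

  # Decode an audit PROCTITLE value (hex, NUL-separated argv) into readable text.
  # Sample value taken from the iptables-restore audit records in this log.
  hex_proctitle = (
      "69707461626C65732D726573746F7265002D770035002D5700313030303030"
      "002D2D6E6F666C757368002D2D636F756E74657273"
  )
  argv = bytes.fromhex(hex_proctitle).split(b"\x00")
  print(" ".join(arg.decode() for arg in argv))
  # prints: iptables-restore -w 5 -W 100000 --noflush --counters

The same decoding applies to the other PROCTITLE records in this capture, including the runc invocations that appear further down.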
Jan 27 13:02:50.852082 kubelet[2952]: I0127 13:02:50.851640 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f698f771-dc5d-4e33-803b-e88c6ed75e62-tigera-ca-bundle\") pod \"calico-typha-7577bbd776-t6xvg\" (UID: \"f698f771-dc5d-4e33-803b-e88c6ed75e62\") " pod="calico-system/calico-typha-7577bbd776-t6xvg" Jan 27 13:02:50.852082 kubelet[2952]: I0127 13:02:50.851761 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2sz6\" (UniqueName: \"kubernetes.io/projected/f698f771-dc5d-4e33-803b-e88c6ed75e62-kube-api-access-c2sz6\") pod \"calico-typha-7577bbd776-t6xvg\" (UID: \"f698f771-dc5d-4e33-803b-e88c6ed75e62\") " pod="calico-system/calico-typha-7577bbd776-t6xvg" Jan 27 13:02:50.852082 kubelet[2952]: I0127 13:02:50.851794 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f698f771-dc5d-4e33-803b-e88c6ed75e62-typha-certs\") pod \"calico-typha-7577bbd776-t6xvg\" (UID: \"f698f771-dc5d-4e33-803b-e88c6ed75e62\") " pod="calico-system/calico-typha-7577bbd776-t6xvg" Jan 27 13:02:50.951601 systemd[1]: Created slice kubepods-besteffort-pod31788857_f0bc_43f7_8bad_13eb7c5b0605.slice - libcontainer container kubepods-besteffort-pod31788857_f0bc_43f7_8bad_13eb7c5b0605.slice. Jan 27 13:02:50.954543 kubelet[2952]: I0127 13:02:50.954426 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-flexvol-driver-host\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.954673 kubelet[2952]: I0127 13:02:50.954555 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-xtables-lock\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.954673 kubelet[2952]: I0127 13:02:50.954597 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwk7l\" (UniqueName: \"kubernetes.io/projected/31788857-f0bc-43f7-8bad-13eb7c5b0605-kube-api-access-cwk7l\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.954673 kubelet[2952]: I0127 13:02:50.954629 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-cni-bin-dir\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.954673 kubelet[2952]: I0127 13:02:50.954666 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-var-lib-calico\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955051 kubelet[2952]: I0127 13:02:50.954714 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-var-run-calico\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955051 kubelet[2952]: I0127 13:02:50.954753 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-cni-log-dir\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955051 kubelet[2952]: I0127 13:02:50.954784 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-cni-net-dir\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955051 kubelet[2952]: I0127 13:02:50.954815 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-lib-modules\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955051 kubelet[2952]: I0127 13:02:50.954869 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31788857-f0bc-43f7-8bad-13eb7c5b0605-policysync\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955269 kubelet[2952]: I0127 13:02:50.954907 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31788857-f0bc-43f7-8bad-13eb7c5b0605-tigera-ca-bundle\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:50.955269 kubelet[2952]: I0127 13:02:50.954950 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/31788857-f0bc-43f7-8bad-13eb7c5b0605-node-certs\") pod \"calico-node-2wzqg\" (UID: \"31788857-f0bc-43f7-8bad-13eb7c5b0605\") " pod="calico-system/calico-node-2wzqg" Jan 27 13:02:51.063770 kubelet[2952]: E0127 13:02:51.063596 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:02:51.069042 kubelet[2952]: E0127 13:02:51.068982 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.069286 kubelet[2952]: W0127 13:02:51.069251 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.069572 kubelet[2952]: E0127 13:02:51.069540 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.076804 kubelet[2952]: E0127 13:02:51.076756 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.076804 kubelet[2952]: W0127 13:02:51.076789 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.077053 kubelet[2952]: E0127 13:02:51.076849 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.103555 kubelet[2952]: E0127 13:02:51.101787 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.103555 kubelet[2952]: W0127 13:02:51.101843 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.103555 kubelet[2952]: E0127 13:02:51.101872 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.123163 containerd[1636]: time="2026-01-27T13:02:51.121985563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7577bbd776-t6xvg,Uid:f698f771-dc5d-4e33-803b-e88c6ed75e62,Namespace:calico-system,Attempt:0,}" Jan 27 13:02:51.147778 kubelet[2952]: E0127 13:02:51.146043 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.147778 kubelet[2952]: W0127 13:02:51.147569 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.147778 kubelet[2952]: E0127 13:02:51.147605 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.148495 kubelet[2952]: E0127 13:02:51.148227 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.148495 kubelet[2952]: W0127 13:02:51.148257 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.148495 kubelet[2952]: E0127 13:02:51.148274 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.148949 kubelet[2952]: E0127 13:02:51.148657 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.148949 kubelet[2952]: W0127 13:02:51.148671 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.148949 kubelet[2952]: E0127 13:02:51.148686 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.150860 kubelet[2952]: E0127 13:02:51.150655 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.150860 kubelet[2952]: W0127 13:02:51.150675 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.150860 kubelet[2952]: E0127 13:02:51.150692 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.151332 kubelet[2952]: E0127 13:02:51.151141 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.151332 kubelet[2952]: W0127 13:02:51.151159 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.151332 kubelet[2952]: E0127 13:02:51.151176 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.152091 kubelet[2952]: E0127 13:02:51.151843 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.152091 kubelet[2952]: W0127 13:02:51.151885 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.152091 kubelet[2952]: E0127 13:02:51.151903 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.152835 kubelet[2952]: E0127 13:02:51.152653 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.152835 kubelet[2952]: W0127 13:02:51.152672 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.152835 kubelet[2952]: E0127 13:02:51.152699 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.153358 kubelet[2952]: E0127 13:02:51.153146 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.153566 kubelet[2952]: W0127 13:02:51.153464 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.153794 kubelet[2952]: E0127 13:02:51.153491 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.154194 kubelet[2952]: E0127 13:02:51.154134 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.154194 kubelet[2952]: W0127 13:02:51.154154 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.154739 kubelet[2952]: E0127 13:02:51.154171 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.155099 kubelet[2952]: E0127 13:02:51.154983 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.155489 kubelet[2952]: W0127 13:02:51.155169 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.155489 kubelet[2952]: E0127 13:02:51.155196 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.156048 kubelet[2952]: E0127 13:02:51.155943 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.156142 kubelet[2952]: W0127 13:02:51.156121 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.158553 kubelet[2952]: E0127 13:02:51.156301 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.159465 kubelet[2952]: E0127 13:02:51.159075 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.159465 kubelet[2952]: W0127 13:02:51.159101 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.159465 kubelet[2952]: E0127 13:02:51.159126 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.160297 kubelet[2952]: E0127 13:02:51.160062 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.160297 kubelet[2952]: W0127 13:02:51.160084 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.160297 kubelet[2952]: E0127 13:02:51.160100 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.163329 kubelet[2952]: E0127 13:02:51.162962 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.163329 kubelet[2952]: W0127 13:02:51.162989 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.163329 kubelet[2952]: E0127 13:02:51.163019 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.164012 kubelet[2952]: E0127 13:02:51.163852 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.164424 kubelet[2952]: W0127 13:02:51.164088 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.164424 kubelet[2952]: E0127 13:02:51.164113 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.165003 kubelet[2952]: E0127 13:02:51.164982 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.165272 kubelet[2952]: W0127 13:02:51.165182 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.165272 kubelet[2952]: E0127 13:02:51.165213 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.166384 kubelet[2952]: E0127 13:02:51.166046 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.166384 kubelet[2952]: W0127 13:02:51.166073 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.166384 kubelet[2952]: E0127 13:02:51.166098 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.167188 kubelet[2952]: E0127 13:02:51.166974 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.167188 kubelet[2952]: W0127 13:02:51.166992 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.167188 kubelet[2952]: E0127 13:02:51.167007 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.167794 kubelet[2952]: E0127 13:02:51.167680 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.168229 kubelet[2952]: W0127 13:02:51.167988 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.168229 kubelet[2952]: E0127 13:02:51.168014 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.169414 kubelet[2952]: E0127 13:02:51.169394 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.169506 kubelet[2952]: W0127 13:02:51.169487 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.169636 kubelet[2952]: E0127 13:02:51.169615 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.177193 kubelet[2952]: E0127 13:02:51.177162 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.178667 kubelet[2952]: W0127 13:02:51.178413 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.178667 kubelet[2952]: E0127 13:02:51.178452 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.178667 kubelet[2952]: I0127 13:02:51.178495 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a652343-1e00-4d74-90a4-253edca0200b-kubelet-dir\") pod \"csi-node-driver-gghgq\" (UID: \"8a652343-1e00-4d74-90a4-253edca0200b\") " pod="calico-system/csi-node-driver-gghgq" Jan 27 13:02:51.179146 kubelet[2952]: E0127 13:02:51.178878 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.179146 kubelet[2952]: W0127 13:02:51.178903 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.179146 kubelet[2952]: E0127 13:02:51.178929 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.179146 kubelet[2952]: I0127 13:02:51.178953 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8a652343-1e00-4d74-90a4-253edca0200b-varrun\") pod \"csi-node-driver-gghgq\" (UID: \"8a652343-1e00-4d74-90a4-253edca0200b\") " pod="calico-system/csi-node-driver-gghgq" Jan 27 13:02:51.179610 kubelet[2952]: E0127 13:02:51.179486 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.179610 kubelet[2952]: W0127 13:02:51.179506 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.179610 kubelet[2952]: E0127 13:02:51.179538 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.179610 kubelet[2952]: I0127 13:02:51.179561 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8a652343-1e00-4d74-90a4-253edca0200b-registration-dir\") pod \"csi-node-driver-gghgq\" (UID: \"8a652343-1e00-4d74-90a4-253edca0200b\") " pod="calico-system/csi-node-driver-gghgq" Jan 27 13:02:51.180988 kubelet[2952]: E0127 13:02:51.180946 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.181683 kubelet[2952]: W0127 13:02:51.181642 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.181752 kubelet[2952]: E0127 13:02:51.181692 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.183194 kubelet[2952]: E0127 13:02:51.182677 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.183194 kubelet[2952]: W0127 13:02:51.182697 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.183194 kubelet[2952]: E0127 13:02:51.182793 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.183194 kubelet[2952]: E0127 13:02:51.183008 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.183194 kubelet[2952]: W0127 13:02:51.183021 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.184494 kubelet[2952]: E0127 13:02:51.183669 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.185500 kubelet[2952]: E0127 13:02:51.184693 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.185500 kubelet[2952]: W0127 13:02:51.184713 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.185500 kubelet[2952]: E0127 13:02:51.184913 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.185500 kubelet[2952]: I0127 13:02:51.184954 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf79d\" (UniqueName: \"kubernetes.io/projected/8a652343-1e00-4d74-90a4-253edca0200b-kube-api-access-gf79d\") pod \"csi-node-driver-gghgq\" (UID: \"8a652343-1e00-4d74-90a4-253edca0200b\") " pod="calico-system/csi-node-driver-gghgq" Jan 27 13:02:51.185730 kubelet[2952]: E0127 13:02:51.185598 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.185730 kubelet[2952]: W0127 13:02:51.185616 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.185730 kubelet[2952]: E0127 13:02:51.185711 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.186847 kubelet[2952]: E0127 13:02:51.186603 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.186847 kubelet[2952]: W0127 13:02:51.186637 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.186847 kubelet[2952]: E0127 13:02:51.186654 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.187304 kubelet[2952]: E0127 13:02:51.187268 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.187304 kubelet[2952]: W0127 13:02:51.187292 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.187304 kubelet[2952]: E0127 13:02:51.187316 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.187895 kubelet[2952]: E0127 13:02:51.187657 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.187895 kubelet[2952]: W0127 13:02:51.187674 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.187895 kubelet[2952]: E0127 13:02:51.187691 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.188357 kubelet[2952]: E0127 13:02:51.187964 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.188357 kubelet[2952]: W0127 13:02:51.187979 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.188357 kubelet[2952]: E0127 13:02:51.187994 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.189220 kubelet[2952]: E0127 13:02:51.188581 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.189220 kubelet[2952]: W0127 13:02:51.188599 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.189220 kubelet[2952]: E0127 13:02:51.188616 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.189220 kubelet[2952]: I0127 13:02:51.188651 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8a652343-1e00-4d74-90a4-253edca0200b-socket-dir\") pod \"csi-node-driver-gghgq\" (UID: \"8a652343-1e00-4d74-90a4-253edca0200b\") " pod="calico-system/csi-node-driver-gghgq" Jan 27 13:02:51.191764 kubelet[2952]: E0127 13:02:51.190486 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.191764 kubelet[2952]: W0127 13:02:51.190526 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.191764 kubelet[2952]: E0127 13:02:51.190549 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.192158 kubelet[2952]: E0127 13:02:51.192079 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.192158 kubelet[2952]: W0127 13:02:51.192100 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.192158 kubelet[2952]: E0127 13:02:51.192117 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.232169 containerd[1636]: time="2026-01-27T13:02:51.232097288Z" level=info msg="connecting to shim ec1434f5420ea5d7414bc4303495096f81daf6f7898caf8a5fcccfa5b86219e2" address="unix:///run/containerd/s/75a9cf18a4a92f53e92c4fa3ce6c8404456edbe313e21c78198f7090073ea95e" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:51.260545 containerd[1636]: time="2026-01-27T13:02:51.259168565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2wzqg,Uid:31788857-f0bc-43f7-8bad-13eb7c5b0605,Namespace:calico-system,Attempt:0,}" Jan 27 13:02:51.290866 kubelet[2952]: E0127 13:02:51.290501 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.290866 kubelet[2952]: W0127 13:02:51.290547 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.290866 kubelet[2952]: E0127 13:02:51.290576 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.291157 kubelet[2952]: E0127 13:02:51.291062 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.291157 kubelet[2952]: W0127 13:02:51.291081 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.291803 kubelet[2952]: E0127 13:02:51.291561 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.293739 kubelet[2952]: E0127 13:02:51.293703 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.293739 kubelet[2952]: W0127 13:02:51.293733 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.293951 kubelet[2952]: E0127 13:02:51.293756 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.294483 kubelet[2952]: E0127 13:02:51.294459 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.294483 kubelet[2952]: W0127 13:02:51.294478 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.295171 kubelet[2952]: E0127 13:02:51.294529 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.295171 kubelet[2952]: E0127 13:02:51.295070 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.295171 kubelet[2952]: W0127 13:02:51.295085 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.295575 kubelet[2952]: E0127 13:02:51.295502 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.295938 kubelet[2952]: W0127 13:02:51.295654 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.295938 kubelet[2952]: E0127 13:02:51.295555 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.295938 kubelet[2952]: E0127 13:02:51.295761 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.297238 kubelet[2952]: E0127 13:02:51.296346 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.297238 kubelet[2952]: W0127 13:02:51.296365 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.297532 kubelet[2952]: E0127 13:02:51.297431 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.298058 kubelet[2952]: E0127 13:02:51.298036 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.298058 kubelet[2952]: W0127 13:02:51.298057 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.298582 kubelet[2952]: E0127 13:02:51.298555 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.300543 kubelet[2952]: E0127 13:02:51.299333 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.300543 kubelet[2952]: W0127 13:02:51.299354 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.300543 kubelet[2952]: E0127 13:02:51.300258 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.301234 kubelet[2952]: E0127 13:02:51.301004 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.301234 kubelet[2952]: W0127 13:02:51.301019 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.301386 kubelet[2952]: E0127 13:02:51.301346 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.301755 kubelet[2952]: E0127 13:02:51.301732 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.301834 kubelet[2952]: W0127 13:02:51.301784 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.303481 kubelet[2952]: E0127 13:02:51.303452 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.303758 kubelet[2952]: E0127 13:02:51.303702 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.303758 kubelet[2952]: W0127 13:02:51.303721 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.303903 kubelet[2952]: E0127 13:02:51.303841 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.304497 kubelet[2952]: E0127 13:02:51.304409 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.304497 kubelet[2952]: W0127 13:02:51.304429 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.304497 kubelet[2952]: E0127 13:02:51.304465 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.304978 kubelet[2952]: E0127 13:02:51.304938 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.304978 kubelet[2952]: W0127 13:02:51.304962 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.304978 kubelet[2952]: E0127 13:02:51.304998 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.306731 kubelet[2952]: E0127 13:02:51.306703 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.306731 kubelet[2952]: W0127 13:02:51.306726 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.307080 kubelet[2952]: E0127 13:02:51.306866 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.307761 kubelet[2952]: E0127 13:02:51.307741 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.307761 kubelet[2952]: W0127 13:02:51.307760 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.308023 kubelet[2952]: E0127 13:02:51.307855 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.308080 kubelet[2952]: E0127 13:02:51.308063 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.308080 kubelet[2952]: W0127 13:02:51.308075 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.308282 kubelet[2952]: E0127 13:02:51.308252 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.308379 kubelet[2952]: E0127 13:02:51.308336 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.308379 kubelet[2952]: W0127 13:02:51.308354 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.308758 kubelet[2952]: E0127 13:02:51.308685 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.309709 kubelet[2952]: E0127 13:02:51.309629 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.309709 kubelet[2952]: W0127 13:02:51.309648 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.310279 kubelet[2952]: E0127 13:02:51.309893 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.310971 kubelet[2952]: E0127 13:02:51.310844 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.310971 kubelet[2952]: W0127 13:02:51.310863 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.310971 kubelet[2952]: E0127 13:02:51.310961 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.312003 kubelet[2952]: E0127 13:02:51.311939 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.312003 kubelet[2952]: W0127 13:02:51.311963 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.312822 kubelet[2952]: E0127 13:02:51.312766 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.312822 kubelet[2952]: E0127 13:02:51.312788 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.312822 kubelet[2952]: W0127 13:02:51.312803 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.313137 kubelet[2952]: E0127 13:02:51.313064 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.313423 kubelet[2952]: E0127 13:02:51.313198 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.313423 kubelet[2952]: W0127 13:02:51.313228 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.314252 kubelet[2952]: E0127 13:02:51.314023 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.314772 kubelet[2952]: E0127 13:02:51.314611 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.314772 kubelet[2952]: W0127 13:02:51.314629 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.314772 kubelet[2952]: E0127 13:02:51.314652 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.318092 kubelet[2952]: E0127 13:02:51.317966 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.318092 kubelet[2952]: W0127 13:02:51.317988 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.318092 kubelet[2952]: E0127 13:02:51.318005 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 27 13:02:51.336757 containerd[1636]: time="2026-01-27T13:02:51.336695975Z" level=info msg="connecting to shim 51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000" address="unix:///run/containerd/s/638f364455e74538c80f207de30cfba9af9c8bdbbbb328e71ccc1e5cbb5c9384" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:02:51.360853 kubelet[2952]: E0127 13:02:51.360798 2952 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 27 13:02:51.360853 kubelet[2952]: W0127 13:02:51.360842 2952 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 27 13:02:51.361107 kubelet[2952]: E0127 13:02:51.360872 2952 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 27 13:02:51.402025 systemd[1]: Started cri-containerd-ec1434f5420ea5d7414bc4303495096f81daf6f7898caf8a5fcccfa5b86219e2.scope - libcontainer container ec1434f5420ea5d7414bc4303495096f81daf6f7898caf8a5fcccfa5b86219e2. Jan 27 13:02:51.425535 systemd[1]: Started cri-containerd-51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000.scope - libcontainer container 51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000. Jan 27 13:02:51.476000 audit: BPF prog-id=155 op=LOAD Jan 27 13:02:51.478000 audit: BPF prog-id=156 op=LOAD Jan 27 13:02:51.478000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.478000 audit: BPF prog-id=156 op=UNLOAD Jan 27 13:02:51.478000 audit[3426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.479000 audit: BPF prog-id=157 op=LOAD Jan 27 13:02:51.479000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.479000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.479000 audit: BPF prog-id=158 op=LOAD Jan 27 13:02:51.479000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.479000 audit: BPF prog-id=158 op=UNLOAD Jan 27 13:02:51.479000 audit[3426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.479000 audit: BPF prog-id=157 op=UNLOAD Jan 27 13:02:51.479000 audit[3426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.479000 audit: BPF prog-id=159 op=LOAD Jan 27 13:02:51.479000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3415 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563313433346635343230656135643734313462633433303334393530 Jan 27 13:02:51.481000 audit: BPF prog-id=160 op=LOAD Jan 27 13:02:51.484000 audit: BPF prog-id=161 op=LOAD Jan 27 13:02:51.484000 audit[3485]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.484000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.484000 audit: BPF prog-id=161 op=UNLOAD Jan 27 13:02:51.484000 audit[3485]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.485000 audit: BPF prog-id=162 op=LOAD Jan 27 13:02:51.485000 audit[3485]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.485000 audit: BPF prog-id=163 op=LOAD Jan 27 13:02:51.485000 audit[3485]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.485000 audit: BPF prog-id=163 op=UNLOAD Jan 27 13:02:51.485000 audit[3485]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.486000 audit: BPF prog-id=162 op=UNLOAD Jan 27 13:02:51.486000 audit[3485]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.486000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.487000 audit: BPF prog-id=164 op=LOAD Jan 27 13:02:51.487000 audit[3485]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3466 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531666662353839633133313132623930646164623937636162333230 Jan 27 13:02:51.523743 containerd[1636]: time="2026-01-27T13:02:51.523684818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2wzqg,Uid:31788857-f0bc-43f7-8bad-13eb7c5b0605,Namespace:calico-system,Attempt:0,} returns sandbox id \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\"" Jan 27 13:02:51.527095 containerd[1636]: time="2026-01-27T13:02:51.526937339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 27 13:02:51.568606 containerd[1636]: time="2026-01-27T13:02:51.568475802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7577bbd776-t6xvg,Uid:f698f771-dc5d-4e33-803b-e88c6ed75e62,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec1434f5420ea5d7414bc4303495096f81daf6f7898caf8a5fcccfa5b86219e2\"" Jan 27 13:02:51.803000 audit[3527]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:51.803000 audit[3527]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdef3e98d0 a2=0 a3=7ffdef3e98bc items=0 ppid=3095 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:51.808000 audit[3527]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:51.808000 audit[3527]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdef3e98d0 a2=0 a3=0 items=0 ppid=3095 pid=3527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:51.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:52.694307 kubelet[2952]: E0127 13:02:52.694198 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:02:53.026543 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2451775627.mount: Deactivated successfully. Jan 27 13:02:53.465545 containerd[1636]: time="2026-01-27T13:02:53.465447490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:53.467899 containerd[1636]: time="2026-01-27T13:02:53.467859110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 27 13:02:53.468768 containerd[1636]: time="2026-01-27T13:02:53.468684631Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:53.473545 containerd[1636]: time="2026-01-27T13:02:53.473410219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:53.475652 containerd[1636]: time="2026-01-27T13:02:53.475593200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.948462226s" Jan 27 13:02:53.475721 containerd[1636]: time="2026-01-27T13:02:53.475651912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 27 13:02:53.478062 containerd[1636]: time="2026-01-27T13:02:53.478022811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 27 13:02:53.481587 containerd[1636]: time="2026-01-27T13:02:53.480850803Z" level=info msg="CreateContainer within sandbox \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 27 13:02:53.495924 containerd[1636]: time="2026-01-27T13:02:53.495873694Z" level=info msg="Container ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:53.500990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139251106.mount: Deactivated successfully. 
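
The repeated kubelet FlexVolume probe failures above come from the missing driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: the exec of "uds init" produces no output, and decoding an empty string as JSON yields exactly "unexpected end of JSON input" (the flexvol-driver container created here appears to be what eventually installs that binary). A minimal Go sketch, with an illustrative stand-in struct rather than kubelet's real types, reproduces the error text:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus is an illustrative stand-in for a FlexVolume status reply.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message"`
    }

    func main() {
        var st driverStatus
        // With the driver executable missing, the call returns empty output;
        // unmarshalling "" gives the error seen in the kubelet records above.
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input
    }
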
Jan 27 13:02:53.509156 containerd[1636]: time="2026-01-27T13:02:53.509077904Z" level=info msg="CreateContainer within sandbox \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de\"" Jan 27 13:02:53.510461 containerd[1636]: time="2026-01-27T13:02:53.510374362Z" level=info msg="StartContainer for \"ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de\"" Jan 27 13:02:53.513249 containerd[1636]: time="2026-01-27T13:02:53.513198732Z" level=info msg="connecting to shim ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de" address="unix:///run/containerd/s/638f364455e74538c80f207de30cfba9af9c8bdbbbb328e71ccc1e5cbb5c9384" protocol=ttrpc version=3 Jan 27 13:02:53.551800 systemd[1]: Started cri-containerd-ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de.scope - libcontainer container ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de. Jan 27 13:02:53.653000 audit: BPF prog-id=165 op=LOAD Jan 27 13:02:53.653000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3466 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:53.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165363037343065313736633635323864303530643430313530373238 Jan 27 13:02:53.654000 audit: BPF prog-id=166 op=LOAD Jan 27 13:02:53.654000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3466 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:53.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165363037343065313736633635323864303530643430313530373238 Jan 27 13:02:53.654000 audit: BPF prog-id=166 op=UNLOAD Jan 27 13:02:53.654000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:53.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165363037343065313736633635323864303530643430313530373238 Jan 27 13:02:53.654000 audit: BPF prog-id=165 op=UNLOAD Jan 27 13:02:53.654000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:53.654000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165363037343065313736633635323864303530643430313530373238 Jan 27 13:02:53.654000 audit: BPF prog-id=167 op=LOAD Jan 27 13:02:53.654000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3466 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:53.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165363037343065313736633635323864303530643430313530373238 Jan 27 13:02:53.712047 containerd[1636]: time="2026-01-27T13:02:53.711988407Z" level=info msg="StartContainer for \"ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de\" returns successfully" Jan 27 13:02:53.733097 systemd[1]: cri-containerd-ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de.scope: Deactivated successfully. Jan 27 13:02:53.736000 audit: BPF prog-id=167 op=UNLOAD Jan 27 13:02:53.778043 containerd[1636]: time="2026-01-27T13:02:53.777976158Z" level=info msg="received container exit event container_id:\"ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de\" id:\"ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de\" pid:3548 exited_at:{seconds:1769518973 nanos:739886830}" Jan 27 13:02:53.830186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae60740e176c6528d050d40150728451176a39e147b1eb008f2572ffa5d076de-rootfs.mount: Deactivated successfully. 
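
The PROCTITLE values in the audit records above are the process argv, hex-encoded with NUL bytes between arguments. A short sketch decodes a verbatim prefix of one value from the log and recovers the runc invocation behind these BPF/SYSCALL events:

    package main

    import (
        "bytes"
        "encoding/hex"
        "fmt"
    )

    func main() {
        // Leading portion of one PROCTITLE value above (truncated for brevity).
        const p = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
        raw, err := hex.DecodeString(p)
        if err != nil {
            panic(err)
        }
        // auditd separates argv entries with NUL bytes.
        for _, arg := range bytes.Split(raw, []byte{0}) {
            fmt.Println(string(arg)) // runc, --root, /run/containerd/runc/k8s.io
        }
    }
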
Jan 27 13:02:54.694988 kubelet[2952]: E0127 13:02:54.694913 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:02:56.695434 kubelet[2952]: E0127 13:02:56.695339 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:02:57.642274 containerd[1636]: time="2026-01-27T13:02:57.642200911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:57.645604 containerd[1636]: time="2026-01-27T13:02:57.645224384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 27 13:02:57.650056 containerd[1636]: time="2026-01-27T13:02:57.649755865Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:57.679241 containerd[1636]: time="2026-01-27T13:02:57.677826157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:02:57.679524 containerd[1636]: time="2026-01-27T13:02:57.679303039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.201230963s" Jan 27 13:02:57.679524 containerd[1636]: time="2026-01-27T13:02:57.679365171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 27 13:02:57.681360 containerd[1636]: time="2026-01-27T13:02:57.681328622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 27 13:02:57.757930 containerd[1636]: time="2026-01-27T13:02:57.757554864Z" level=info msg="CreateContainer within sandbox \"ec1434f5420ea5d7414bc4303495096f81daf6f7898caf8a5fcccfa5b86219e2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 27 13:02:57.813171 containerd[1636]: time="2026-01-27T13:02:57.813088308Z" level=info msg="Container 195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:02:57.819648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863648457.mount: Deactivated successfully. 
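
The "Pulled image ... in 4.201230963s" figure for calico/typha lines up with the gap between the PullImage record at 13:02:53.478022811 and the Pulled record at 13:02:57.679303039; containerd times the pull internally, so the reported duration is a few tens of microseconds shorter than the difference between the two journal timestamps. A quick check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the PullImage and Pulled records above.
        start, _ := time.Parse(time.RFC3339Nano, "2026-01-27T13:02:53.478022811Z")
        done, _ := time.Parse(time.RFC3339Nano, "2026-01-27T13:02:57.679303039Z")
        fmt.Println(done.Sub(start)) // 4.201280228s, vs. 4.201230963s reported by containerd
    }
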
Jan 27 13:02:57.828258 containerd[1636]: time="2026-01-27T13:02:57.828199011Z" level=info msg="CreateContainer within sandbox \"ec1434f5420ea5d7414bc4303495096f81daf6f7898caf8a5fcccfa5b86219e2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1\"" Jan 27 13:02:57.829742 containerd[1636]: time="2026-01-27T13:02:57.829673881Z" level=info msg="StartContainer for \"195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1\"" Jan 27 13:02:57.834774 containerd[1636]: time="2026-01-27T13:02:57.834680631Z" level=info msg="connecting to shim 195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1" address="unix:///run/containerd/s/75a9cf18a4a92f53e92c4fa3ce6c8404456edbe313e21c78198f7090073ea95e" protocol=ttrpc version=3 Jan 27 13:02:57.891068 systemd[1]: Started cri-containerd-195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1.scope - libcontainer container 195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1. Jan 27 13:02:57.937725 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 27 13:02:57.939896 kernel: audit: type=1334 audit(1769518977.930:563): prog-id=168 op=LOAD Jan 27 13:02:57.930000 audit: BPF prog-id=168 op=LOAD Jan 27 13:02:57.944000 audit: BPF prog-id=169 op=LOAD Jan 27 13:02:57.944000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.949586 kernel: audit: type=1334 audit(1769518977.944:564): prog-id=169 op=LOAD Jan 27 13:02:57.949713 kernel: audit: type=1300 audit(1769518977.944:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.955046 kernel: audit: type=1327 audit(1769518977.944:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.958916 kernel: audit: type=1334 audit(1769518977.946:565): prog-id=169 op=UNLOAD Jan 27 13:02:57.946000 audit: BPF prog-id=169 op=UNLOAD Jan 27 13:02:57.946000 audit[3592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.962658 kernel: audit: type=1300 audit(1769518977.946:565): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
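
The kernel audit records above carry their own timestamp in the audit(SECONDS.MILLIS:SERIAL) field; the seconds value is a Unix epoch that decodes to the same wall-clock time shown in the journal prefix, e.g. audit(1769518977.944:564) is Jan 27 13:02:57.944 UTC. A one-liner to confirm:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Epoch taken from audit(1769518977.944:564) in the records above.
        fmt.Println(time.Unix(1769518977, 944_000_000).UTC())
        // 2026-01-27 13:02:57.944 +0000 UTC
    }
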
Jan 27 13:02:57.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.968069 kernel: audit: type=1327 audit(1769518977.946:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.946000 audit: BPF prog-id=170 op=LOAD Jan 27 13:02:57.975540 kernel: audit: type=1334 audit(1769518977.946:566): prog-id=170 op=LOAD Jan 27 13:02:57.946000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.984358 kernel: audit: type=1300 audit(1769518977.946:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.984464 kernel: audit: type=1327 audit(1769518977.946:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.946000 audit: BPF prog-id=171 op=LOAD Jan 27 13:02:57.946000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.946000 audit: BPF prog-id=171 op=UNLOAD Jan 27 13:02:57.946000 audit[3592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.946000 audit: BPF prog-id=170 op=UNLOAD Jan 27 
13:02:57.946000 audit[3592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:57.946000 audit: BPF prog-id=172 op=LOAD Jan 27 13:02:57.946000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3415 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:57.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353032386333663363643666326366633737656661636266393434 Jan 27 13:02:58.061400 containerd[1636]: time="2026-01-27T13:02:58.061172093Z" level=info msg="StartContainer for \"195028c3f3cd6f2cfc77efacbf944f5a43d8c6e39ea7dd4ca6537c973e2674f1\" returns successfully" Jan 27 13:02:58.694759 kubelet[2952]: E0127 13:02:58.694630 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:02:58.950170 kubelet[2952]: I0127 13:02:58.948824 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7577bbd776-t6xvg" podStartSLOduration=2.839307152 podStartE2EDuration="8.947613579s" podCreationTimestamp="2026-01-27 13:02:50 +0000 UTC" firstStartedPulling="2026-01-27 13:02:51.572885555 +0000 UTC m=+26.147227710" lastFinishedPulling="2026-01-27 13:02:57.68119198 +0000 UTC m=+32.255534137" observedRunningTime="2026-01-27 13:02:58.943973059 +0000 UTC m=+33.518315246" watchObservedRunningTime="2026-01-27 13:02:58.947613579 +0000 UTC m=+33.521955768" Jan 27 13:02:59.864000 audit[3629]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:59.864000 audit[3629]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedad9d3e0 a2=0 a3=7ffedad9d3cc items=0 ppid=3095 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:59.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:02:59.865000 audit[3629]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:02:59.865000 audit[3629]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffedad9d3e0 a2=0 a3=7ffedad9d3cc items=0 ppid=3095 pid=3629 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:02:59.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:00.695142 kubelet[2952]: E0127 13:03:00.694934 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:02.695129 kubelet[2952]: E0127 13:03:02.695062 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:04.700101 kubelet[2952]: E0127 13:03:04.699842 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:04.860465 containerd[1636]: time="2026-01-27T13:03:04.860091544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:03:04.861743 containerd[1636]: time="2026-01-27T13:03:04.861703477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 27 13:03:04.862242 containerd[1636]: time="2026-01-27T13:03:04.862209460Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:03:04.869947 containerd[1636]: time="2026-01-27T13:03:04.869869722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:03:04.871312 containerd[1636]: time="2026-01-27T13:03:04.871263146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.18988428s" Jan 27 13:03:04.871379 containerd[1636]: time="2026-01-27T13:03:04.871320692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 27 13:03:04.880047 containerd[1636]: time="2026-01-27T13:03:04.879983726Z" level=info msg="CreateContainer within sandbox \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 27 13:03:04.895954 containerd[1636]: time="2026-01-27T13:03:04.895872644Z" level=info msg="Container a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6: CDI devices from CRI 
Config.CDIDevices: []" Jan 27 13:03:04.918957 containerd[1636]: time="2026-01-27T13:03:04.918897970Z" level=info msg="CreateContainer within sandbox \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6\"" Jan 27 13:03:04.920261 containerd[1636]: time="2026-01-27T13:03:04.920056068Z" level=info msg="StartContainer for \"a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6\"" Jan 27 13:03:04.922852 containerd[1636]: time="2026-01-27T13:03:04.922807073Z" level=info msg="connecting to shim a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6" address="unix:///run/containerd/s/638f364455e74538c80f207de30cfba9af9c8bdbbbb328e71ccc1e5cbb5c9384" protocol=ttrpc version=3 Jan 27 13:03:04.993991 systemd[1]: Started cri-containerd-a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6.scope - libcontainer container a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6. Jan 27 13:03:05.082481 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 27 13:03:05.082965 kernel: audit: type=1334 audit(1769518985.072:573): prog-id=173 op=LOAD Jan 27 13:03:05.072000 audit: BPF prog-id=173 op=LOAD Jan 27 13:03:05.072000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.092010 kernel: audit: type=1300 audit(1769518985.072:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.092079 kernel: audit: type=1327 audit(1769518985.072:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.072000 audit: BPF prog-id=174 op=LOAD Jan 27 13:03:05.095873 kernel: audit: type=1334 audit(1769518985.072:574): prog-id=174 op=LOAD Jan 27 13:03:05.072000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.098607 kernel: audit: type=1300 audit(1769518985.072:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.072000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.104145 kernel: audit: type=1327 audit(1769518985.072:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.107747 kernel: audit: type=1334 audit(1769518985.073:575): prog-id=174 op=UNLOAD Jan 27 13:03:05.073000 audit: BPF prog-id=174 op=UNLOAD Jan 27 13:03:05.073000 audit[3643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.116558 kernel: audit: type=1300 audit(1769518985.073:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.124959 kernel: audit: type=1327 audit(1769518985.073:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.079000 audit: BPF prog-id=173 op=UNLOAD Jan 27 13:03:05.130549 kernel: audit: type=1334 audit(1769518985.079:576): prog-id=173 op=UNLOAD Jan 27 13:03:05.079000 audit[3643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.079000 audit: BPF prog-id=175 op=LOAD Jan 27 13:03:05.079000 audit[3643]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3466 pid=3643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:05.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138306533373438666134626639646232633266643830326264616538 Jan 27 13:03:05.168764 
containerd[1636]: time="2026-01-27T13:03:05.168689123Z" level=info msg="StartContainer for \"a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6\" returns successfully" Jan 27 13:03:06.410908 systemd[1]: cri-containerd-a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6.scope: Deactivated successfully. Jan 27 13:03:06.411717 systemd[1]: cri-containerd-a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6.scope: Consumed 1.015s CPU time, 163.2M memory peak, 6.3M read from disk, 171.3M written to disk. Jan 27 13:03:06.413000 audit: BPF prog-id=175 op=UNLOAD Jan 27 13:03:06.419865 containerd[1636]: time="2026-01-27T13:03:06.419781637Z" level=info msg="received container exit event container_id:\"a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6\" id:\"a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6\" pid:3655 exited_at:{seconds:1769518986 nanos:419250710}" Jan 27 13:03:06.497192 kubelet[2952]: I0127 13:03:06.497121 2952 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 27 13:03:06.514025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a80e3748fa4bf9db2c2fd802bdae8205761e0e2e50c5bbd22ea9392650a22bc6-rootfs.mount: Deactivated successfully. Jan 27 13:03:06.659612 systemd[1]: Created slice kubepods-burstable-pod37d5fe65_0a05_46a2_aa2f_7ad5352003bd.slice - libcontainer container kubepods-burstable-pod37d5fe65_0a05_46a2_aa2f_7ad5352003bd.slice. Jan 27 13:03:06.685120 kubelet[2952]: W0127 13:03:06.683406 2952 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-4nwk8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-4nwk8.gb1.brightbox.com' and this object Jan 27 13:03:06.687482 kubelet[2952]: E0127 13:03:06.685238 2952 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-4nwk8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-4nwk8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 27 13:03:06.687482 kubelet[2952]: W0127 13:03:06.685807 2952 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-4nwk8.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-4nwk8.gb1.brightbox.com' and this object Jan 27 13:03:06.687482 kubelet[2952]: E0127 13:03:06.686793 2952 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-4nwk8.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-4nwk8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 27 13:03:06.691469 systemd[1]: Created slice kubepods-besteffort-podd20e3435_24a0_4d45_b1d0_2db2610f07b9.slice - libcontainer container kubepods-besteffort-podd20e3435_24a0_4d45_b1d0_2db2610f07b9.slice. 
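
The "Created slice" records above and below name each pod's systemd slice after its QoS class plus the pod UID with dashes turned into underscores (e.g. UID 37d5fe65-0a05-46a2-aa2f-7ad5352003bd becomes kubepods-burstable-pod37d5fe65_0a05_46a2_aa2f_7ad5352003bd.slice). A rough sketch of that mapping as it appears in this log, not kubelet's actual implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName mirrors the naming pattern visible in the log: QoS class plus
    // the pod UID with '-' replaced by '_', wrapped as a .slice unit.
    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "37d5fe65-0a05-46a2-aa2f-7ad5352003bd"))
        fmt.Println(sliceName("besteffort", "8a652343-1e00-4d74-90a4-253edca0200b"))
    }
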
Jan 27 13:03:06.712168 systemd[1]: Created slice kubepods-besteffort-pod31e32b52_76dd_4c4a_b037_c1818999e71b.slice - libcontainer container kubepods-besteffort-pod31e32b52_76dd_4c4a_b037_c1818999e71b.slice. Jan 27 13:03:06.727372 systemd[1]: Created slice kubepods-burstable-pod74f07544_3ec6_434d_a104_1f67b11cb370.slice - libcontainer container kubepods-burstable-pod74f07544_3ec6_434d_a104_1f67b11cb370.slice. Jan 27 13:03:06.745730 kubelet[2952]: I0127 13:03:06.745661 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwvl\" (UniqueName: \"kubernetes.io/projected/37d5fe65-0a05-46a2-aa2f-7ad5352003bd-kube-api-access-8cwvl\") pod \"coredns-668d6bf9bc-vrljx\" (UID: \"37d5fe65-0a05-46a2-aa2f-7ad5352003bd\") " pod="kube-system/coredns-668d6bf9bc-vrljx" Jan 27 13:03:06.745730 kubelet[2952]: I0127 13:03:06.745732 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/49dcb295-61bb-47ac-9721-51e5abeacfeb-calico-apiserver-certs\") pod \"calico-apiserver-675cb5c68f-lgkj9\" (UID: \"49dcb295-61bb-47ac-9721-51e5abeacfeb\") " pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" Jan 27 13:03:06.746048 kubelet[2952]: I0127 13:03:06.745791 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31e32b52-76dd-4c4a-b037-c1818999e71b-calico-apiserver-certs\") pod \"calico-apiserver-675cb5c68f-6grht\" (UID: \"31e32b52-76dd-4c4a-b037-c1818999e71b\") " pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" Jan 27 13:03:06.746048 kubelet[2952]: I0127 13:03:06.745846 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkh2\" (UniqueName: \"kubernetes.io/projected/31e32b52-76dd-4c4a-b037-c1818999e71b-kube-api-access-dzkh2\") pod \"calico-apiserver-675cb5c68f-6grht\" (UID: \"31e32b52-76dd-4c4a-b037-c1818999e71b\") " pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" Jan 27 13:03:06.746048 kubelet[2952]: I0127 13:03:06.745907 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d20e3435-24a0-4d45-b1d0-2db2610f07b9-tigera-ca-bundle\") pod \"calico-kube-controllers-85cdccb5f6-wx4tb\" (UID: \"d20e3435-24a0-4d45-b1d0-2db2610f07b9\") " pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" Jan 27 13:03:06.746048 kubelet[2952]: I0127 13:03:06.745936 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrmb\" (UniqueName: \"kubernetes.io/projected/d20e3435-24a0-4d45-b1d0-2db2610f07b9-kube-api-access-jfrmb\") pod \"calico-kube-controllers-85cdccb5f6-wx4tb\" (UID: \"d20e3435-24a0-4d45-b1d0-2db2610f07b9\") " pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" Jan 27 13:03:06.746048 kubelet[2952]: I0127 13:03:06.745969 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72dc\" (UniqueName: \"kubernetes.io/projected/edbc19c4-c5a2-4875-9a7b-5c829dca568c-kube-api-access-m72dc\") pod \"goldmane-666569f655-kpqqc\" (UID: \"edbc19c4-c5a2-4875-9a7b-5c829dca568c\") " pod="calico-system/goldmane-666569f655-kpqqc" Jan 27 13:03:06.746334 kubelet[2952]: I0127 13:03:06.745996 2952 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbc19c4-c5a2-4875-9a7b-5c829dca568c-config\") pod \"goldmane-666569f655-kpqqc\" (UID: \"edbc19c4-c5a2-4875-9a7b-5c829dca568c\") " pod="calico-system/goldmane-666569f655-kpqqc" Jan 27 13:03:06.746334 kubelet[2952]: I0127 13:03:06.746040 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-backend-key-pair\") pod \"whisker-5dd56df978-nczgt\" (UID: \"8569123c-abee-43a6-aac4-12a05912eeb0\") " pod="calico-system/whisker-5dd56df978-nczgt" Jan 27 13:03:06.746334 kubelet[2952]: I0127 13:03:06.746093 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37d5fe65-0a05-46a2-aa2f-7ad5352003bd-config-volume\") pod \"coredns-668d6bf9bc-vrljx\" (UID: \"37d5fe65-0a05-46a2-aa2f-7ad5352003bd\") " pod="kube-system/coredns-668d6bf9bc-vrljx" Jan 27 13:03:06.746334 kubelet[2952]: I0127 13:03:06.746131 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-ca-bundle\") pod \"whisker-5dd56df978-nczgt\" (UID: \"8569123c-abee-43a6-aac4-12a05912eeb0\") " pod="calico-system/whisker-5dd56df978-nczgt" Jan 27 13:03:06.746334 kubelet[2952]: I0127 13:03:06.746165 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/edbc19c4-c5a2-4875-9a7b-5c829dca568c-goldmane-key-pair\") pod \"goldmane-666569f655-kpqqc\" (UID: \"edbc19c4-c5a2-4875-9a7b-5c829dca568c\") " pod="calico-system/goldmane-666569f655-kpqqc" Jan 27 13:03:06.748717 kubelet[2952]: I0127 13:03:06.746225 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljc8\" (UniqueName: \"kubernetes.io/projected/74f07544-3ec6-434d-a104-1f67b11cb370-kube-api-access-qljc8\") pod \"coredns-668d6bf9bc-cbp4m\" (UID: \"74f07544-3ec6-434d-a104-1f67b11cb370\") " pod="kube-system/coredns-668d6bf9bc-cbp4m" Jan 27 13:03:06.748717 kubelet[2952]: I0127 13:03:06.746256 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnq8t\" (UniqueName: \"kubernetes.io/projected/8569123c-abee-43a6-aac4-12a05912eeb0-kube-api-access-tnq8t\") pod \"whisker-5dd56df978-nczgt\" (UID: \"8569123c-abee-43a6-aac4-12a05912eeb0\") " pod="calico-system/whisker-5dd56df978-nczgt" Jan 27 13:03:06.748717 kubelet[2952]: I0127 13:03:06.746293 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xtr\" (UniqueName: \"kubernetes.io/projected/49dcb295-61bb-47ac-9721-51e5abeacfeb-kube-api-access-s8xtr\") pod \"calico-apiserver-675cb5c68f-lgkj9\" (UID: \"49dcb295-61bb-47ac-9721-51e5abeacfeb\") " pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" Jan 27 13:03:06.748717 kubelet[2952]: I0127 13:03:06.746325 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edbc19c4-c5a2-4875-9a7b-5c829dca568c-goldmane-ca-bundle\") pod \"goldmane-666569f655-kpqqc\" (UID: \"edbc19c4-c5a2-4875-9a7b-5c829dca568c\") " 
pod="calico-system/goldmane-666569f655-kpqqc" Jan 27 13:03:06.748717 kubelet[2952]: I0127 13:03:06.746361 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74f07544-3ec6-434d-a104-1f67b11cb370-config-volume\") pod \"coredns-668d6bf9bc-cbp4m\" (UID: \"74f07544-3ec6-434d-a104-1f67b11cb370\") " pod="kube-system/coredns-668d6bf9bc-cbp4m" Jan 27 13:03:06.755930 systemd[1]: Created slice kubepods-besteffort-podedbc19c4_c5a2_4875_9a7b_5c829dca568c.slice - libcontainer container kubepods-besteffort-podedbc19c4_c5a2_4875_9a7b_5c829dca568c.slice. Jan 27 13:03:06.770964 systemd[1]: Created slice kubepods-besteffort-pod49dcb295_61bb_47ac_9721_51e5abeacfeb.slice - libcontainer container kubepods-besteffort-pod49dcb295_61bb_47ac_9721_51e5abeacfeb.slice. Jan 27 13:03:06.787256 systemd[1]: Created slice kubepods-besteffort-pod8569123c_abee_43a6_aac4_12a05912eeb0.slice - libcontainer container kubepods-besteffort-pod8569123c_abee_43a6_aac4_12a05912eeb0.slice. Jan 27 13:03:06.802015 systemd[1]: Created slice kubepods-besteffort-pod8a652343_1e00_4d74_90a4_253edca0200b.slice - libcontainer container kubepods-besteffort-pod8a652343_1e00_4d74_90a4_253edca0200b.slice. Jan 27 13:03:06.807363 containerd[1636]: time="2026-01-27T13:03:06.807313057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gghgq,Uid:8a652343-1e00-4d74-90a4-253edca0200b,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:06.985208 containerd[1636]: time="2026-01-27T13:03:06.985033624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrljx,Uid:37d5fe65-0a05-46a2-aa2f-7ad5352003bd,Namespace:kube-system,Attempt:0,}" Jan 27 13:03:07.004916 containerd[1636]: time="2026-01-27T13:03:07.004719479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccb5f6-wx4tb,Uid:d20e3435-24a0-4d45-b1d0-2db2610f07b9,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:07.039363 containerd[1636]: time="2026-01-27T13:03:07.037827950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 27 13:03:07.043622 containerd[1636]: time="2026-01-27T13:03:07.041789619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp4m,Uid:74f07544-3ec6-434d-a104-1f67b11cb370,Namespace:kube-system,Attempt:0,}" Jan 27 13:03:07.069745 containerd[1636]: time="2026-01-27T13:03:07.069686515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kpqqc,Uid:edbc19c4-c5a2-4875-9a7b-5c829dca568c,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:07.098287 containerd[1636]: time="2026-01-27T13:03:07.098225676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd56df978-nczgt,Uid:8569123c-abee-43a6-aac4-12a05912eeb0,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:07.342858 containerd[1636]: time="2026-01-27T13:03:07.342645102Z" level=error msg="Failed to destroy network for sandbox \"3e03155e39ddd1d671371f7e834aba91b546d6f485a55e5cb666af46458ee182\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.346648 containerd[1636]: time="2026-01-27T13:03:07.346449533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gghgq,Uid:8a652343-1e00-4d74-90a4-253edca0200b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"3e03155e39ddd1d671371f7e834aba91b546d6f485a55e5cb666af46458ee182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.349569 kubelet[2952]: E0127 13:03:07.348266 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e03155e39ddd1d671371f7e834aba91b546d6f485a55e5cb666af46458ee182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.349569 kubelet[2952]: E0127 13:03:07.348411 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e03155e39ddd1d671371f7e834aba91b546d6f485a55e5cb666af46458ee182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gghgq" Jan 27 13:03:07.352571 kubelet[2952]: E0127 13:03:07.350159 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e03155e39ddd1d671371f7e834aba91b546d6f485a55e5cb666af46458ee182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gghgq" Jan 27 13:03:07.352571 kubelet[2952]: E0127 13:03:07.351126 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e03155e39ddd1d671371f7e834aba91b546d6f485a55e5cb666af46458ee182\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:07.354048 containerd[1636]: time="2026-01-27T13:03:07.353990846Z" level=error msg="Failed to destroy network for sandbox \"eced34071df79b91d81ee202215ad75900b423a4e129100ed1dd7721178af2a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.363139 containerd[1636]: time="2026-01-27T13:03:07.362761635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp4m,Uid:74f07544-3ec6-434d-a104-1f67b11cb370,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eced34071df79b91d81ee202215ad75900b423a4e129100ed1dd7721178af2a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.363278 kubelet[2952]: E0127 13:03:07.363035 2952 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eced34071df79b91d81ee202215ad75900b423a4e129100ed1dd7721178af2a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.363278 kubelet[2952]: E0127 13:03:07.363095 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eced34071df79b91d81ee202215ad75900b423a4e129100ed1dd7721178af2a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbp4m" Jan 27 13:03:07.363278 kubelet[2952]: E0127 13:03:07.363125 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eced34071df79b91d81ee202215ad75900b423a4e129100ed1dd7721178af2a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbp4m" Jan 27 13:03:07.363551 kubelet[2952]: E0127 13:03:07.363189 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cbp4m_kube-system(74f07544-3ec6-434d-a104-1f67b11cb370)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cbp4m_kube-system(74f07544-3ec6-434d-a104-1f67b11cb370)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eced34071df79b91d81ee202215ad75900b423a4e129100ed1dd7721178af2a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbp4m" podUID="74f07544-3ec6-434d-a104-1f67b11cb370" Jan 27 13:03:07.367300 containerd[1636]: time="2026-01-27T13:03:07.367247901Z" level=error msg="Failed to destroy network for sandbox \"ad4cf15e7a3ca4a1a219ea1b43ee014012782b38a42b8daa823933851d4b21f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.370283 containerd[1636]: time="2026-01-27T13:03:07.370241876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccb5f6-wx4tb,Uid:d20e3435-24a0-4d45-b1d0-2db2610f07b9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4cf15e7a3ca4a1a219ea1b43ee014012782b38a42b8daa823933851d4b21f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.371042 kubelet[2952]: E0127 13:03:07.370998 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4cf15e7a3ca4a1a219ea1b43ee014012782b38a42b8daa823933851d4b21f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 27 13:03:07.371127 kubelet[2952]: E0127 13:03:07.371063 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4cf15e7a3ca4a1a219ea1b43ee014012782b38a42b8daa823933851d4b21f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" Jan 27 13:03:07.371127 kubelet[2952]: E0127 13:03:07.371096 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4cf15e7a3ca4a1a219ea1b43ee014012782b38a42b8daa823933851d4b21f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" Jan 27 13:03:07.371231 kubelet[2952]: E0127 13:03:07.371148 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85cdccb5f6-wx4tb_calico-system(d20e3435-24a0-4d45-b1d0-2db2610f07b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cdccb5f6-wx4tb_calico-system(d20e3435-24a0-4d45-b1d0-2db2610f07b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad4cf15e7a3ca4a1a219ea1b43ee014012782b38a42b8daa823933851d4b21f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:03:07.380999 containerd[1636]: time="2026-01-27T13:03:07.380757910Z" level=error msg="Failed to destroy network for sandbox \"891bb6131e24e0775023edff8da7cb281c3aa92556e3d3f9e4bf5a43a3664c37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.381797 containerd[1636]: time="2026-01-27T13:03:07.381739379Z" level=error msg="Failed to destroy network for sandbox \"7a500b6b6b6d0fdf12dacde83375983a0f768e905912253bc7a550f47d28d05d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.384565 containerd[1636]: time="2026-01-27T13:03:07.384477125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kpqqc,Uid:edbc19c4-c5a2-4875-9a7b-5c829dca568c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a500b6b6b6d0fdf12dacde83375983a0f768e905912253bc7a550f47d28d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.385423 kubelet[2952]: E0127 13:03:07.385319 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a500b6b6b6d0fdf12dacde83375983a0f768e905912253bc7a550f47d28d05d\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.385510 kubelet[2952]: E0127 13:03:07.385417 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a500b6b6b6d0fdf12dacde83375983a0f768e905912253bc7a550f47d28d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kpqqc" Jan 27 13:03:07.385510 kubelet[2952]: E0127 13:03:07.385449 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a500b6b6b6d0fdf12dacde83375983a0f768e905912253bc7a550f47d28d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kpqqc" Jan 27 13:03:07.385663 kubelet[2952]: E0127 13:03:07.385524 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kpqqc_calico-system(edbc19c4-c5a2-4875-9a7b-5c829dca568c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kpqqc_calico-system(edbc19c4-c5a2-4875-9a7b-5c829dca568c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a500b6b6b6d0fdf12dacde83375983a0f768e905912253bc7a550f47d28d05d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:03:07.386028 containerd[1636]: time="2026-01-27T13:03:07.385761588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrljx,Uid:37d5fe65-0a05-46a2-aa2f-7ad5352003bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"891bb6131e24e0775023edff8da7cb281c3aa92556e3d3f9e4bf5a43a3664c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.386133 kubelet[2952]: E0127 13:03:07.385970 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"891bb6131e24e0775023edff8da7cb281c3aa92556e3d3f9e4bf5a43a3664c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.386133 kubelet[2952]: E0127 13:03:07.386010 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"891bb6131e24e0775023edff8da7cb281c3aa92556e3d3f9e4bf5a43a3664c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vrljx" Jan 27 13:03:07.386133 kubelet[2952]: E0127 13:03:07.386051 2952 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"891bb6131e24e0775023edff8da7cb281c3aa92556e3d3f9e4bf5a43a3664c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vrljx" Jan 27 13:03:07.387059 kubelet[2952]: E0127 13:03:07.386093 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vrljx_kube-system(37d5fe65-0a05-46a2-aa2f-7ad5352003bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vrljx_kube-system(37d5fe65-0a05-46a2-aa2f-7ad5352003bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"891bb6131e24e0775023edff8da7cb281c3aa92556e3d3f9e4bf5a43a3664c37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vrljx" podUID="37d5fe65-0a05-46a2-aa2f-7ad5352003bd" Jan 27 13:03:07.404750 containerd[1636]: time="2026-01-27T13:03:07.404669731Z" level=error msg="Failed to destroy network for sandbox \"8efed694b380967b1b085289e71ef14fa98033526ac00759e5ac8a398db34821\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.407139 containerd[1636]: time="2026-01-27T13:03:07.407098907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd56df978-nczgt,Uid:8569123c-abee-43a6-aac4-12a05912eeb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efed694b380967b1b085289e71ef14fa98033526ac00759e5ac8a398db34821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.409023 kubelet[2952]: E0127 13:03:07.408974 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efed694b380967b1b085289e71ef14fa98033526ac00759e5ac8a398db34821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:07.409120 kubelet[2952]: E0127 13:03:07.409044 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efed694b380967b1b085289e71ef14fa98033526ac00759e5ac8a398db34821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd56df978-nczgt" Jan 27 13:03:07.409120 kubelet[2952]: E0127 13:03:07.409089 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8efed694b380967b1b085289e71ef14fa98033526ac00759e5ac8a398db34821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5dd56df978-nczgt" Jan 27 13:03:07.409226 kubelet[2952]: E0127 13:03:07.409157 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dd56df978-nczgt_calico-system(8569123c-abee-43a6-aac4-12a05912eeb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dd56df978-nczgt_calico-system(8569123c-abee-43a6-aac4-12a05912eeb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8efed694b380967b1b085289e71ef14fa98033526ac00759e5ac8a398db34821\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd56df978-nczgt" podUID="8569123c-abee-43a6-aac4-12a05912eeb0" Jan 27 13:03:07.518990 systemd[1]: run-netns-cni\x2d2c48ed1b\x2d339f\x2d2c3e\x2d8a89\x2daa52b09432c6.mount: Deactivated successfully. Jan 27 13:03:07.888177 kubelet[2952]: E0127 13:03:07.887533 2952 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 27 13:03:07.888177 kubelet[2952]: E0127 13:03:07.887781 2952 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31e32b52-76dd-4c4a-b037-c1818999e71b-calico-apiserver-certs podName:31e32b52-76dd-4c4a-b037-c1818999e71b nodeName:}" failed. No retries permitted until 2026-01-27 13:03:08.38769522 +0000 UTC m=+42.962037380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/31e32b52-76dd-4c4a-b037-c1818999e71b-calico-apiserver-certs") pod "calico-apiserver-675cb5c68f-6grht" (UID: "31e32b52-76dd-4c4a-b037-c1818999e71b") : failed to sync secret cache: timed out waiting for the condition Jan 27 13:03:07.888177 kubelet[2952]: E0127 13:03:07.887555 2952 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 27 13:03:07.888177 kubelet[2952]: E0127 13:03:07.888130 2952 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49dcb295-61bb-47ac-9721-51e5abeacfeb-calico-apiserver-certs podName:49dcb295-61bb-47ac-9721-51e5abeacfeb nodeName:}" failed. No retries permitted until 2026-01-27 13:03:08.388102264 +0000 UTC m=+42.962444426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/49dcb295-61bb-47ac-9721-51e5abeacfeb-calico-apiserver-certs") pod "calico-apiserver-675cb5c68f-lgkj9" (UID: "49dcb295-61bb-47ac-9721-51e5abeacfeb") : failed to sync secret cache: timed out waiting for the condition Jan 27 13:03:07.936831 kubelet[2952]: E0127 13:03:07.936754 2952 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 13:03:07.936831 kubelet[2952]: E0127 13:03:07.936847 2952 projected.go:194] Error preparing data for projected volume kube-api-access-dzkh2 for pod calico-apiserver/calico-apiserver-675cb5c68f-6grht: failed to sync configmap cache: timed out waiting for the condition Jan 27 13:03:07.937260 kubelet[2952]: E0127 13:03:07.936987 2952 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31e32b52-76dd-4c4a-b037-c1818999e71b-kube-api-access-dzkh2 podName:31e32b52-76dd-4c4a-b037-c1818999e71b nodeName:}" failed. 
No retries permitted until 2026-01-27 13:03:08.436960687 +0000 UTC m=+43.011302841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzkh2" (UniqueName: "kubernetes.io/projected/31e32b52-76dd-4c4a-b037-c1818999e71b-kube-api-access-dzkh2") pod "calico-apiserver-675cb5c68f-6grht" (UID: "31e32b52-76dd-4c4a-b037-c1818999e71b") : failed to sync configmap cache: timed out waiting for the condition Jan 27 13:03:07.953561 kubelet[2952]: E0127 13:03:07.953296 2952 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 13:03:07.953561 kubelet[2952]: E0127 13:03:07.953357 2952 projected.go:194] Error preparing data for projected volume kube-api-access-s8xtr for pod calico-apiserver/calico-apiserver-675cb5c68f-lgkj9: failed to sync configmap cache: timed out waiting for the condition Jan 27 13:03:07.953561 kubelet[2952]: E0127 13:03:07.953441 2952 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/49dcb295-61bb-47ac-9721-51e5abeacfeb-kube-api-access-s8xtr podName:49dcb295-61bb-47ac-9721-51e5abeacfeb nodeName:}" failed. No retries permitted until 2026-01-27 13:03:08.453418444 +0000 UTC m=+43.027760606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s8xtr" (UniqueName: "kubernetes.io/projected/49dcb295-61bb-47ac-9721-51e5abeacfeb-kube-api-access-s8xtr") pod "calico-apiserver-675cb5c68f-lgkj9" (UID: "49dcb295-61bb-47ac-9721-51e5abeacfeb") : failed to sync configmap cache: timed out waiting for the condition Jan 27 13:03:08.524790 containerd[1636]: time="2026-01-27T13:03:08.524664661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-6grht,Uid:31e32b52-76dd-4c4a-b037-c1818999e71b,Namespace:calico-apiserver,Attempt:0,}" Jan 27 13:03:08.581115 containerd[1636]: time="2026-01-27T13:03:08.581039346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-lgkj9,Uid:49dcb295-61bb-47ac-9721-51e5abeacfeb,Namespace:calico-apiserver,Attempt:0,}" Jan 27 13:03:08.685010 containerd[1636]: time="2026-01-27T13:03:08.684907364Z" level=error msg="Failed to destroy network for sandbox \"d7398c0a7d72dda9431fc9d4d9260fd9248b010e50aa0793b5f5eac205a7a0fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:08.688992 systemd[1]: run-netns-cni\x2d5d39aa3f\x2d7383\x2d5280\x2d7820\x2d0934284bd23f.mount: Deactivated successfully. 
Jan 27 13:03:08.691100 containerd[1636]: time="2026-01-27T13:03:08.690898896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-6grht,Uid:31e32b52-76dd-4c4a-b037-c1818999e71b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7398c0a7d72dda9431fc9d4d9260fd9248b010e50aa0793b5f5eac205a7a0fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:08.691568 kubelet[2952]: E0127 13:03:08.691435 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7398c0a7d72dda9431fc9d4d9260fd9248b010e50aa0793b5f5eac205a7a0fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:08.692817 kubelet[2952]: E0127 13:03:08.691985 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7398c0a7d72dda9431fc9d4d9260fd9248b010e50aa0793b5f5eac205a7a0fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" Jan 27 13:03:08.692817 kubelet[2952]: E0127 13:03:08.692173 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7398c0a7d72dda9431fc9d4d9260fd9248b010e50aa0793b5f5eac205a7a0fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" Jan 27 13:03:08.692817 kubelet[2952]: E0127 13:03:08.692253 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7398c0a7d72dda9431fc9d4d9260fd9248b010e50aa0793b5f5eac205a7a0fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:03:08.740324 containerd[1636]: time="2026-01-27T13:03:08.740247562Z" level=error msg="Failed to destroy network for sandbox \"25dac461d379f8b1ef96886a93627bc72a2bc479d8f25c9ec2b6fb877283e51a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:08.744221 systemd[1]: run-netns-cni\x2da6aba0c4\x2df638\x2d61d8\x2d94b2\x2de11109929257.mount: Deactivated successfully. 
Jan 27 13:03:08.746925 containerd[1636]: time="2026-01-27T13:03:08.746581638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-lgkj9,Uid:49dcb295-61bb-47ac-9721-51e5abeacfeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dac461d379f8b1ef96886a93627bc72a2bc479d8f25c9ec2b6fb877283e51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:08.747072 kubelet[2952]: E0127 13:03:08.746895 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dac461d379f8b1ef96886a93627bc72a2bc479d8f25c9ec2b6fb877283e51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:08.747072 kubelet[2952]: E0127 13:03:08.746980 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dac461d379f8b1ef96886a93627bc72a2bc479d8f25c9ec2b6fb877283e51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" Jan 27 13:03:08.747072 kubelet[2952]: E0127 13:03:08.747028 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25dac461d379f8b1ef96886a93627bc72a2bc479d8f25c9ec2b6fb877283e51a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" Jan 27 13:03:08.748444 kubelet[2952]: E0127 13:03:08.747103 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-675cb5c68f-lgkj9_calico-apiserver(49dcb295-61bb-47ac-9721-51e5abeacfeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-675cb5c68f-lgkj9_calico-apiserver(49dcb295-61bb-47ac-9721-51e5abeacfeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25dac461d379f8b1ef96886a93627bc72a2bc479d8f25c9ec2b6fb877283e51a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:03:19.259435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273912051.mount: Deactivated successfully. 
Jan 27 13:03:19.320090 containerd[1636]: time="2026-01-27T13:03:19.302767817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:03:19.331463 containerd[1636]: time="2026-01-27T13:03:19.329932885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 27 13:03:19.359539 containerd[1636]: time="2026-01-27T13:03:19.359129792Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:03:19.360320 containerd[1636]: time="2026-01-27T13:03:19.360290336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 27 13:03:19.361619 containerd[1636]: time="2026-01-27T13:03:19.361578623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.319937212s" Jan 27 13:03:19.361702 containerd[1636]: time="2026-01-27T13:03:19.361635669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 27 13:03:19.408961 containerd[1636]: time="2026-01-27T13:03:19.408862823Z" level=info msg="CreateContainer within sandbox \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 27 13:03:19.504562 containerd[1636]: time="2026-01-27T13:03:19.502777313Z" level=info msg="Container 527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:03:19.505894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636662963.mount: Deactivated successfully. 
Jan 27 13:03:19.586775 containerd[1636]: time="2026-01-27T13:03:19.586612089Z" level=info msg="CreateContainer within sandbox \"51ffb589c13112b90dadb97cab3202ee4e2f38c3346b165ab492c2a2078f3000\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8\"" Jan 27 13:03:19.590548 containerd[1636]: time="2026-01-27T13:03:19.590494052Z" level=info msg="StartContainer for \"527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8\"" Jan 27 13:03:19.594474 containerd[1636]: time="2026-01-27T13:03:19.594423331Z" level=info msg="connecting to shim 527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8" address="unix:///run/containerd/s/638f364455e74538c80f207de30cfba9af9c8bdbbbb328e71ccc1e5cbb5c9384" protocol=ttrpc version=3 Jan 27 13:03:19.735243 containerd[1636]: time="2026-01-27T13:03:19.735172344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp4m,Uid:74f07544-3ec6-434d-a104-1f67b11cb370,Namespace:kube-system,Attempt:0,}" Jan 27 13:03:19.736920 containerd[1636]: time="2026-01-27T13:03:19.736620000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrljx,Uid:37d5fe65-0a05-46a2-aa2f-7ad5352003bd,Namespace:kube-system,Attempt:0,}" Jan 27 13:03:19.737865 systemd[1]: Started cri-containerd-527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8.scope - libcontainer container 527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8. Jan 27 13:03:19.921000 audit: BPF prog-id=176 op=LOAD Jan 27 13:03:19.927295 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 27 13:03:19.927456 kernel: audit: type=1334 audit(1769518999.921:579): prog-id=176 op=LOAD Jan 27 13:03:19.921000 audit[3906]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:19.937549 kernel: audit: type=1300 audit(1769518999.921:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:19.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:19.946946 kernel: audit: type=1327 audit(1769518999.921:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:19.928000 audit: BPF prog-id=177 op=LOAD Jan 27 13:03:19.951543 kernel: audit: type=1334 audit(1769518999.928:580): prog-id=177 op=LOAD Jan 27 13:03:19.928000 audit[3906]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 
13:03:19.957556 kernel: audit: type=1300 audit(1769518999.928:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:19.967555 kernel: audit: type=1327 audit(1769518999.928:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:19.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:19.928000 audit: BPF prog-id=177 op=UNLOAD Jan 27 13:03:19.971600 kernel: audit: type=1334 audit(1769518999.928:581): prog-id=177 op=UNLOAD Jan 27 13:03:19.971677 kernel: audit: type=1300 audit(1769518999.928:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:19.928000 audit[3906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:20.000039 kernel: audit: type=1327 audit(1769518999.928:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:20.000151 kernel: audit: type=1334 audit(1769518999.928:582): prog-id=176 op=UNLOAD Jan 27 13:03:19.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:19.928000 audit: BPF prog-id=176 op=UNLOAD Jan 27 13:03:19.928000 audit[3906]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:19.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:19.928000 audit: BPF prog-id=178 op=LOAD Jan 27 13:03:19.928000 audit[3906]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3466 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:19.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532376435623663306464373565323132383730343363303138366639 Jan 27 13:03:20.017210 containerd[1636]: time="2026-01-27T13:03:19.972117855Z" level=error msg="Failed to destroy network for sandbox \"00170b51d505cb863a3ba23307c9006c7f767b366b3854fb749d3f9da9d3294a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:20.017210 containerd[1636]: time="2026-01-27T13:03:19.974808533Z" level=error msg="Failed to destroy network for sandbox \"c56227b4370e891e63138dd277e08e7fd720115e38d7829ad02b2ff6dd417c11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:20.043109 containerd[1636]: time="2026-01-27T13:03:20.042943797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp4m,Uid:74f07544-3ec6-434d-a104-1f67b11cb370,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00170b51d505cb863a3ba23307c9006c7f767b366b3854fb749d3f9da9d3294a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:20.044892 containerd[1636]: time="2026-01-27T13:03:20.044807086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrljx,Uid:37d5fe65-0a05-46a2-aa2f-7ad5352003bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c56227b4370e891e63138dd277e08e7fd720115e38d7829ad02b2ff6dd417c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:20.045532 kubelet[2952]: E0127 13:03:20.045158 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c56227b4370e891e63138dd277e08e7fd720115e38d7829ad02b2ff6dd417c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:20.045532 kubelet[2952]: E0127 13:03:20.045246 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c56227b4370e891e63138dd277e08e7fd720115e38d7829ad02b2ff6dd417c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vrljx" Jan 27 13:03:20.045532 kubelet[2952]: E0127 13:03:20.045281 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c56227b4370e891e63138dd277e08e7fd720115e38d7829ad02b2ff6dd417c11\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vrljx" Jan 27 13:03:20.047075 kubelet[2952]: E0127 13:03:20.047028 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vrljx_kube-system(37d5fe65-0a05-46a2-aa2f-7ad5352003bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vrljx_kube-system(37d5fe65-0a05-46a2-aa2f-7ad5352003bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c56227b4370e891e63138dd277e08e7fd720115e38d7829ad02b2ff6dd417c11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vrljx" podUID="37d5fe65-0a05-46a2-aa2f-7ad5352003bd" Jan 27 13:03:20.050677 kubelet[2952]: E0127 13:03:20.047807 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00170b51d505cb863a3ba23307c9006c7f767b366b3854fb749d3f9da9d3294a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:20.050677 kubelet[2952]: E0127 13:03:20.050499 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00170b51d505cb863a3ba23307c9006c7f767b366b3854fb749d3f9da9d3294a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbp4m" Jan 27 13:03:20.050677 kubelet[2952]: E0127 13:03:20.050557 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00170b51d505cb863a3ba23307c9006c7f767b366b3854fb749d3f9da9d3294a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cbp4m" Jan 27 13:03:20.050867 kubelet[2952]: E0127 13:03:20.050613 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cbp4m_kube-system(74f07544-3ec6-434d-a104-1f67b11cb370)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cbp4m_kube-system(74f07544-3ec6-434d-a104-1f67b11cb370)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00170b51d505cb863a3ba23307c9006c7f767b366b3854fb749d3f9da9d3294a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cbp4m" podUID="74f07544-3ec6-434d-a104-1f67b11cb370" Jan 27 13:03:20.051410 containerd[1636]: time="2026-01-27T13:03:20.051360653Z" level=info msg="StartContainer for \"527d5b6c0dd75e21287043c0186f9a18f3453c523a1e980fb632863cb0b156e8\" returns successfully" Jan 27 13:03:20.151997 kubelet[2952]: I0127 13:03:20.151292 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-node-2wzqg" podStartSLOduration=2.313244911 podStartE2EDuration="30.151265211s" podCreationTimestamp="2026-01-27 13:02:50 +0000 UTC" firstStartedPulling="2026-01-27 13:02:51.525836337 +0000 UTC m=+26.100178521" lastFinishedPulling="2026-01-27 13:03:19.363856664 +0000 UTC m=+53.938198821" observedRunningTime="2026-01-27 13:03:20.146653641 +0000 UTC m=+54.720995839" watchObservedRunningTime="2026-01-27 13:03:20.151265211 +0000 UTC m=+54.725607412" Jan 27 13:03:20.609113 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 27 13:03:20.609700 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 27 13:03:20.709543 containerd[1636]: time="2026-01-27T13:03:20.709216675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gghgq,Uid:8a652343-1e00-4d74-90a4-253edca0200b,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:20.713688 containerd[1636]: time="2026-01-27T13:03:20.711867273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-6grht,Uid:31e32b52-76dd-4c4a-b037-c1818999e71b,Namespace:calico-apiserver,Attempt:0,}" Jan 27 13:03:20.713688 containerd[1636]: time="2026-01-27T13:03:20.712112319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccb5f6-wx4tb,Uid:d20e3435-24a0-4d45-b1d0-2db2610f07b9,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:20.713688 containerd[1636]: time="2026-01-27T13:03:20.712182434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd56df978-nczgt,Uid:8569123c-abee-43a6-aac4-12a05912eeb0,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:20.713688 containerd[1636]: time="2026-01-27T13:03:20.712573990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kpqqc,Uid:edbc19c4-c5a2-4875-9a7b-5c829dca568c,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:21.719698 containerd[1636]: time="2026-01-27T13:03:21.719635840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-lgkj9,Uid:49dcb295-61bb-47ac-9721-51e5abeacfeb,Namespace:calico-apiserver,Attempt:0,}" Jan 27 13:03:21.940921 systemd-networkd[1541]: cali89d09fe8299: Link UP Jan 27 13:03:21.941283 systemd-networkd[1541]: cali89d09fe8299: Gained carrier Jan 27 13:03:21.984338 containerd[1636]: 2026-01-27 13:03:21.072 [INFO][4035] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 13:03:21.984338 containerd[1636]: 2026-01-27 13:03:21.263 [INFO][4035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0 csi-node-driver- calico-system 8a652343-1e00-4d74-90a4-253edca0200b 691 0 2026-01-27 13:02:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com csi-node-driver-gghgq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali89d09fe8299 [] [] }} ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-" Jan 27 13:03:21.984338 containerd[1636]: 2026-01-27 13:03:21.264 
[INFO][4035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:21.984338 containerd[1636]: 2026-01-27 13:03:21.698 [INFO][4134] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" HandleID="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Workload="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.704 [INFO][4134] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" HandleID="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Workload="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c230), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"csi-node-driver-gghgq", "timestamp":"2026-01-27 13:03:21.698717131 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.704 [INFO][4134] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.704 [INFO][4134] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.710 [INFO][4134] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.751 [INFO][4134] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.779 [INFO][4134] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.808 [INFO][4134] ipam/ipam.go 543: Ran out of existing affine blocks for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.813 [INFO][4134] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.825 [INFO][4134] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.64.64/26 Jan 27 13:03:21.984847 containerd[1636]: 2026-01-27 13:03:21.825 [INFO][4134] ipam/ipam.go 572: Found unclaimed block host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.826 [INFO][4134] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.837 [INFO][4134] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.837 [INFO][4134] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.840 [INFO][4134] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.845 [INFO][4134] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.849 [INFO][4134] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.849 [INFO][4134] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.858 [INFO][4134] ipam/ipam_block_reader_writer.go 267: Successfully created block Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.858 [INFO][4134] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.864 [INFO][4134] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.864 [INFO][4134] ipam/ipam.go 607: Block '192.168.64.64/26' has 64 free ips which is more than 1 ips required. 
host="srv-4nwk8.gb1.brightbox.com" subnet=192.168.64.64/26 Jan 27 13:03:21.987365 containerd[1636]: 2026-01-27 13:03:21.864 [INFO][4134] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.990974 containerd[1636]: 2026-01-27 13:03:21.868 [INFO][4134] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030 Jan 27 13:03:21.990974 containerd[1636]: 2026-01-27 13:03:21.881 [INFO][4134] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.990974 containerd[1636]: 2026-01-27 13:03:21.895 [INFO][4134] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.64/26] block=192.168.64.64/26 handle="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.990974 containerd[1636]: 2026-01-27 13:03:21.895 [INFO][4134] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.64/26] handle="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:21.990974 containerd[1636]: 2026-01-27 13:03:21.895 [INFO][4134] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:03:21.990974 containerd[1636]: 2026-01-27 13:03:21.896 [INFO][4134] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.64/26] IPv6=[] ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" HandleID="k8s-pod-network.4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Workload="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:21.991281 containerd[1636]: 2026-01-27 13:03:21.908 [INFO][4035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a652343-1e00-4d74-90a4-253edca0200b", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-gghgq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89d09fe8299", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:21.992696 containerd[1636]: 2026-01-27 13:03:21.908 [INFO][4035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.64/32] ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:21.992696 containerd[1636]: 2026-01-27 13:03:21.908 [INFO][4035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89d09fe8299 ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:21.992696 containerd[1636]: 2026-01-27 13:03:21.935 [INFO][4035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:21.996797 containerd[1636]: 2026-01-27 13:03:21.937 [INFO][4035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a652343-1e00-4d74-90a4-253edca0200b", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030", Pod:"csi-node-driver-gghgq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali89d09fe8299", MAC:"8a:7c:93:af:eb:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:21.996904 containerd[1636]: 2026-01-27 13:03:21.971 [INFO][4035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" 
Namespace="calico-system" Pod="csi-node-driver-gghgq" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-csi--node--driver--gghgq-eth0" Jan 27 13:03:22.080122 systemd-networkd[1541]: cali8b4e03cef37: Link UP Jan 27 13:03:22.082275 systemd-networkd[1541]: cali8b4e03cef37: Gained carrier Jan 27 13:03:22.132937 containerd[1636]: 2026-01-27 13:03:21.063 [INFO][4041] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 13:03:22.132937 containerd[1636]: 2026-01-27 13:03:21.273 [INFO][4041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0 whisker-5dd56df978- calico-system 8569123c-abee-43a6-aac4-12a05912eeb0 878 0 2026-01-27 13:02:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dd56df978 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com whisker-5dd56df978-nczgt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8b4e03cef37 [] [] }} ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-" Jan 27 13:03:22.132937 containerd[1636]: 2026-01-27 13:03:21.276 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.132937 containerd[1636]: 2026-01-27 13:03:21.705 [INFO][4135] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:21.709 [INFO][4135] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371920), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"whisker-5dd56df978-nczgt", "timestamp":"2026-01-27 13:03:21.705853938 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:21.709 [INFO][4135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:21.896 [INFO][4135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:21.896 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:21.951 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:21.982 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:22.012 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:22.019 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134441 containerd[1636]: 2026-01-27 13:03:22.025 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.025 [INFO][4135] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.033 [INFO][4135] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90 Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.042 [INFO][4135] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.053 [INFO][4135] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.65/26] block=192.168.64.64/26 handle="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.053 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.65/26] handle="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.053 [INFO][4135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
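The IPAM entries above walk the same /26 block twice: 192.168.64.64 went to csi-node-driver-gghgq, and the next request (for whisker-5dd56df978-nczgt) receives 192.168.64.65. The sketch below illustrates only that bookkeeping step, picking the first free address inside the claimed block; it is not Calico's implementation, which additionally records handles and affinities in the datastore.

```go
// Sketch of the "assign 1 address from block" step seen in the log above.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not already marked used.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.64.64/26") // block claimed for this host in the log
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.64.64"): true, // already handed to csi-node-driver-gghgq
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("next free address:", a) // 192.168.64.65, matching the whisker assignment
	}
}
```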
Jan 27 13:03:22.134871 containerd[1636]: 2026-01-27 13:03:22.053 [INFO][4135] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.65/26] IPv6=[] ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.137482 containerd[1636]: 2026-01-27 13:03:22.069 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0", GenerateName:"whisker-5dd56df978-", Namespace:"calico-system", SelfLink:"", UID:"8569123c-abee-43a6-aac4-12a05912eeb0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dd56df978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5dd56df978-nczgt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8b4e03cef37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.137482 containerd[1636]: 2026-01-27 13:03:22.069 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.65/32] ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.137799 containerd[1636]: 2026-01-27 13:03:22.069 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b4e03cef37 ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.137799 containerd[1636]: 2026-01-27 13:03:22.083 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.138206 containerd[1636]: 2026-01-27 13:03:22.088 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" 
Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0", GenerateName:"whisker-5dd56df978-", Namespace:"calico-system", SelfLink:"", UID:"8569123c-abee-43a6-aac4-12a05912eeb0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dd56df978", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90", Pod:"whisker-5dd56df978-nczgt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8b4e03cef37", MAC:"aa:b1:d2:6e:23:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.138307 containerd[1636]: 2026-01-27 13:03:22.125 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Namespace="calico-system" Pod="whisker-5dd56df978-nczgt" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:22.223758 systemd-networkd[1541]: cali0257fd6ed27: Link UP Jan 27 13:03:22.229960 systemd-networkd[1541]: cali0257fd6ed27: Gained carrier Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.311 [INFO][4094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.312 [INFO][4094] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" iface="eth0" netns="/var/run/netns/cni-b46fdc66-2b84-d7e0-915b-f969f7e0880a" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.314 [INFO][4094] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" iface="eth0" netns="/var/run/netns/cni-b46fdc66-2b84-d7e0-915b-f969f7e0880a" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.316 [INFO][4094] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" iface="eth0" netns="/var/run/netns/cni-b46fdc66-2b84-d7e0-915b-f969f7e0880a" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.316 [INFO][4094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.316 [INFO][4094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.704 [INFO][4130] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" HandleID="k8s-pod-network.e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:21.722 [INFO][4130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:22.256855 containerd[1636]: 2026-01-27 13:03:22.179 [INFO][4130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:22.257411 containerd[1636]: 2026-01-27 13:03:22.202 [WARNING][4130] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" HandleID="k8s-pod-network.e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:22.257411 containerd[1636]: 2026-01-27 13:03:22.202 [INFO][4130] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" HandleID="k8s-pod-network.e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:22.257411 containerd[1636]: 2026-01-27 13:03:22.215 [INFO][4130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:03:22.257411 containerd[1636]: 2026-01-27 13:03:22.229 [INFO][4094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3" Jan 27 13:03:22.263707 containerd[1636]: time="2026-01-27T13:03:22.261601481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-6grht,Uid:31e32b52-76dd-4c4a-b037-c1818999e71b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:22.265885 systemd[1]: run-netns-cni\x2db46fdc66\x2d2b84\x2dd7e0\x2d915b\x2df969f7e0880a.mount: Deactivated successfully. 
Jan 27 13:03:22.304106 containerd[1636]: 2026-01-27 13:03:21.063 [INFO][4055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 13:03:22.304106 containerd[1636]: 2026-01-27 13:03:21.271 [INFO][4055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0 goldmane-666569f655- calico-system edbc19c4-c5a2-4875-9a7b-5c829dca568c 820 0 2026-01-27 13:02:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com goldmane-666569f655-kpqqc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0257fd6ed27 [] [] }} ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-" Jan 27 13:03:22.304106 containerd[1636]: 2026-01-27 13:03:21.275 [INFO][4055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.304106 containerd[1636]: 2026-01-27 13:03:21.698 [INFO][4136] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" HandleID="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Workload="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:21.709 [INFO][4136] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" HandleID="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Workload="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ca420), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"goldmane-666569f655-kpqqc", "timestamp":"2026-01-27 13:03:21.698600619 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:21.709 [INFO][4136] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.054 [INFO][4136] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
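Each CNI ADD above prints its IPAM request as an assignArgs dump. To make the one for goldmane-666569f655-kpqqc readable outside the single-line dump, the sketch below mirrors just the fields visible in the log in a local struct; it is an assumed stand-in shaped after what is printed, not the real libcalico-go type.

```go
// Local stand-in shaped after the fields visible in the
// "Auto assigning IP ... assignArgs=ipam.AutoAssignArgs{...}" log lines.
package main

import "fmt"

type autoAssignArgs struct {
	Num4, Num6  int
	Attrs       map[string]string
	Hostname    string
	IntendedUse string
}

func main() {
	req := autoAssignArgs{
		Num4: 1, // one IPv4 address requested
		Num6: 0, // no IPv6 requested
		Attrs: map[string]string{ // attributes recorded against the allocation
			"namespace": "calico-system",
			"node":      "srv-4nwk8.gb1.brightbox.com",
			"pod":       "goldmane-666569f655-kpqqc",
		},
		Hostname:    "srv-4nwk8.gb1.brightbox.com",
		IntendedUse: "Workload",
	}
	fmt.Printf("%+v\n", req)
}
```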
Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.054 [INFO][4136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.082 [INFO][4136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.100 [INFO][4136] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.130 [INFO][4136] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.139 [INFO][4136] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.305189 containerd[1636]: 2026-01-27 13:03:22.147 [INFO][4136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.147 [INFO][4136] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.152 [INFO][4136] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.163 [INFO][4136] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.176 [INFO][4136] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.66/26] block=192.168.64.64/26 handle="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.176 [INFO][4136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.66/26] handle="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.176 [INFO][4136] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 13:03:22.307292 containerd[1636]: 2026-01-27 13:03:22.176 [INFO][4136] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.66/26] IPv6=[] ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" HandleID="k8s-pod-network.c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Workload="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.307642 containerd[1636]: 2026-01-27 13:03:22.190 [INFO][4055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"edbc19c4-c5a2-4875-9a7b-5c829dca568c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-kpqqc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0257fd6ed27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.307768 containerd[1636]: 2026-01-27 13:03:22.191 [INFO][4055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.66/32] ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.307768 containerd[1636]: 2026-01-27 13:03:22.191 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0257fd6ed27 ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.307768 containerd[1636]: 2026-01-27 13:03:22.244 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.308017 containerd[1636]: 2026-01-27 13:03:22.250 [INFO][4055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" 
Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"edbc19c4-c5a2-4875-9a7b-5c829dca568c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e", Pod:"goldmane-666569f655-kpqqc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0257fd6ed27", MAC:"aa:2f:47:af:b4:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.308484 containerd[1636]: 2026-01-27 13:03:22.281 [INFO][4055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" Namespace="calico-system" Pod="goldmane-666569f655-kpqqc" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-goldmane--666569f655--kpqqc-eth0" Jan 27 13:03:22.316353 kubelet[2952]: E0127 13:03:22.315932 2952 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 27 13:03:22.338303 kubelet[2952]: E0127 13:03:22.338186 2952 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" Jan 27 13:03:22.338303 kubelet[2952]: E0127 13:03:22.338265 2952 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" Jan 27 13:03:22.343474 kubelet[2952]: E0127 13:03:22.342544 2952 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e25192adddcc9b6ba6960952c5c18ad401e9b935fc3e5f9319f1235073f89cc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:03:22.453722 containerd[1636]: time="2026-01-27T13:03:22.449379464Z" level=info msg="connecting to shim 4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030" address="unix:///run/containerd/s/c303360bf5102e383fdca2bb0d05ebea0aca027df970013adcbc62e7e5e499be" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:22.458765 containerd[1636]: time="2026-01-27T13:03:22.458697536Z" level=info msg="connecting to shim 9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" address="unix:///run/containerd/s/e4f902d6131fd3474ac8eeaf6624d31d6e37042c7304f1d3a893b1e51df8ee8d" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:22.480305 containerd[1636]: time="2026-01-27T13:03:22.480204932Z" level=info msg="connecting to shim c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e" address="unix:///run/containerd/s/f8340390c8c622a952d6263296cf1687bf2f2329539cf70aec58661533996b9c" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:22.535703 systemd-networkd[1541]: cali6178fe79854: Link UP Jan 27 13:03:22.538686 systemd-networkd[1541]: cali6178fe79854: Gained carrier Jan 27 13:03:22.618760 containerd[1636]: 2026-01-27 13:03:21.052 [INFO][4031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 13:03:22.618760 containerd[1636]: 2026-01-27 13:03:21.271 [INFO][4031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0 calico-kube-controllers-85cdccb5f6- calico-system d20e3435-24a0-4d45-b1d0-2db2610f07b9 823 0 2026-01-27 13:02:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85cdccb5f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com calico-kube-controllers-85cdccb5f6-wx4tb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6178fe79854 [] [] }} ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-" Jan 27 13:03:22.618760 containerd[1636]: 2026-01-27 13:03:21.275 [INFO][4031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.618760 containerd[1636]: 2026-01-27 
13:03:21.710 [INFO][4138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" HandleID="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:21.711 [INFO][4138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" HandleID="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8150), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"calico-kube-controllers-85cdccb5f6-wx4tb", "timestamp":"2026-01-27 13:03:21.710820801 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:21.731 [INFO][4138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.214 [INFO][4138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.215 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.277 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.320 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.350 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.355 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.620237 containerd[1636]: 2026-01-27 13:03:22.363 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.367 [INFO][4138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.384 [INFO][4138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14 Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.426 [INFO][4138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.469 [INFO][4138] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.64.67/26] block=192.168.64.64/26 handle="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.473 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.67/26] handle="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.476 [INFO][4138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:03:22.621895 containerd[1636]: 2026-01-27 13:03:22.478 [INFO][4138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.67/26] IPv6=[] ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" HandleID="k8s-pod-network.854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.622775 containerd[1636]: 2026-01-27 13:03:22.497 [INFO][4031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0", GenerateName:"calico-kube-controllers-85cdccb5f6-", Namespace:"calico-system", SelfLink:"", UID:"d20e3435-24a0-4d45-b1d0-2db2610f07b9", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cdccb5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-85cdccb5f6-wx4tb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6178fe79854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.623345 containerd[1636]: 2026-01-27 13:03:22.497 [INFO][4031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.67/32] ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.623345 containerd[1636]: 2026-01-27 13:03:22.502 [INFO][4031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6178fe79854 
ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.623345 containerd[1636]: 2026-01-27 13:03:22.539 [INFO][4031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.624100 containerd[1636]: 2026-01-27 13:03:22.540 [INFO][4031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0", GenerateName:"calico-kube-controllers-85cdccb5f6-", Namespace:"calico-system", SelfLink:"", UID:"d20e3435-24a0-4d45-b1d0-2db2610f07b9", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cdccb5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14", Pod:"calico-kube-controllers-85cdccb5f6-wx4tb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6178fe79854", MAC:"62:d7:69:3a:78:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.624636 containerd[1636]: 2026-01-27 13:03:22.606 [INFO][4031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" Namespace="calico-system" Pod="calico-kube-controllers-85cdccb5f6-wx4tb" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--kube--controllers--85cdccb5f6--wx4tb-eth0" Jan 27 13:03:22.652231 systemd[1]: Started cri-containerd-4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030.scope - libcontainer container 4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030. Jan 27 13:03:22.661216 systemd[1]: Started cri-containerd-9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90.scope - libcontainer container 9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90. 
Jan 27 13:03:22.699469 systemd[1]: Started cri-containerd-c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e.scope - libcontainer container c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e. Jan 27 13:03:22.739824 systemd-networkd[1541]: calia804a566670: Link UP Jan 27 13:03:22.743597 systemd-networkd[1541]: calia804a566670: Gained carrier Jan 27 13:03:22.779688 containerd[1636]: time="2026-01-27T13:03:22.779582646Z" level=info msg="connecting to shim 854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14" address="unix:///run/containerd/s/915e74eff5f490cdece697e877195d488162a82372b1f30aab07297fdcdbea41" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:22.812840 containerd[1636]: 2026-01-27 13:03:21.850 [INFO][4175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 13:03:22.812840 containerd[1636]: 2026-01-27 13:03:21.874 [INFO][4175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0 calico-apiserver-675cb5c68f- calico-apiserver 49dcb295-61bb-47ac-9721-51e5abeacfeb 821 0 2026-01-27 13:02:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:675cb5c68f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com calico-apiserver-675cb5c68f-lgkj9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia804a566670 [] [] }} ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-" Jan 27 13:03:22.812840 containerd[1636]: 2026-01-27 13:03:21.874 [INFO][4175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.812840 containerd[1636]: 2026-01-27 13:03:22.030 [INFO][4188] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" HandleID="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.030 [INFO][4188] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" HandleID="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"calico-apiserver-675cb5c68f-lgkj9", "timestamp":"2026-01-27 13:03:22.030456959 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.031 [INFO][4188] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.474 [INFO][4188] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.474 [INFO][4188] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.515 [INFO][4188] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.550 [INFO][4188] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.592 [INFO][4188] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.609 [INFO][4188] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816097 containerd[1636]: 2026-01-27 13:03:22.628 [INFO][4188] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.630 [INFO][4188] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.635 [INFO][4188] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.664 [INFO][4188] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.696 [INFO][4188] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.69/26] block=192.168.64.64/26 handle="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.696 [INFO][4188] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.69/26] handle="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.696 [INFO][4188] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 13:03:22.816569 containerd[1636]: 2026-01-27 13:03:22.696 [INFO][4188] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.69/26] IPv6=[] ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" HandleID="k8s-pod-network.d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.817682 containerd[1636]: 2026-01-27 13:03:22.709 [INFO][4175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0", GenerateName:"calico-apiserver-675cb5c68f-", Namespace:"calico-apiserver", SelfLink:"", UID:"49dcb295-61bb-47ac-9721-51e5abeacfeb", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"675cb5c68f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-675cb5c68f-lgkj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia804a566670", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.818259 containerd[1636]: 2026-01-27 13:03:22.710 [INFO][4175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.69/32] ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.818259 containerd[1636]: 2026-01-27 13:03:22.710 [INFO][4175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia804a566670 ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.818259 containerd[1636]: 2026-01-27 13:03:22.744 [INFO][4175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.818813 containerd[1636]: 2026-01-27 13:03:22.749 
[INFO][4175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0", GenerateName:"calico-apiserver-675cb5c68f-", Namespace:"calico-apiserver", SelfLink:"", UID:"49dcb295-61bb-47ac-9721-51e5abeacfeb", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"675cb5c68f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae", Pod:"calico-apiserver-675cb5c68f-lgkj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia804a566670", MAC:"a6:3b:69:59:21:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:22.818936 containerd[1636]: 2026-01-27 13:03:22.802 [INFO][4175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-lgkj9" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--lgkj9-eth0" Jan 27 13:03:22.851000 audit: BPF prog-id=179 op=LOAD Jan 27 13:03:22.855000 audit: BPF prog-id=180 op=LOAD Jan 27 13:03:22.856000 audit: BPF prog-id=181 op=LOAD Jan 27 13:03:22.856000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.856000 audit: BPF prog-id=181 op=UNLOAD Jan 27 13:03:22.856000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.856000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.857000 audit: BPF prog-id=182 op=LOAD Jan 27 13:03:22.857000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.857000 audit: BPF prog-id=183 op=LOAD Jan 27 13:03:22.857000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.858000 audit: BPF prog-id=183 op=UNLOAD Jan 27 13:03:22.858000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.858000 audit: BPF prog-id=182 op=UNLOAD Jan 27 13:03:22.858000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.858000 audit: BPF prog-id=184 op=LOAD Jan 27 13:03:22.858000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4247 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.858000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965613864616664323639663264373734393462633234393762333663 Jan 27 13:03:22.861000 audit: BPF prog-id=185 op=LOAD Jan 27 13:03:22.861000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.861000 audit: BPF prog-id=185 op=UNLOAD Jan 27 13:03:22.861000 audit[4291]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.863000 audit: BPF prog-id=186 op=LOAD Jan 27 13:03:22.863000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.863000 audit: BPF prog-id=187 op=LOAD Jan 27 13:03:22.863000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.863000 audit: BPF prog-id=187 op=UNLOAD Jan 27 13:03:22.863000 audit[4291]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.863000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.863000 audit: BPF prog-id=186 op=UNLOAD Jan 27 13:03:22.863000 audit[4291]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.863000 audit: BPF prog-id=188 op=LOAD Jan 27 13:03:22.863000 audit[4291]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4242 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613236323634373439396436636536336237333361386532326563 Jan 27 13:03:22.871000 audit: BPF prog-id=189 op=LOAD Jan 27 13:03:22.877000 audit: BPF prog-id=190 op=LOAD Jan 27 13:03:22.877000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.882000 audit: BPF prog-id=190 op=UNLOAD Jan 27 13:03:22.882000 audit[4300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.882000 audit: BPF prog-id=191 op=LOAD Jan 27 13:03:22.882000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.882000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.883000 audit: BPF prog-id=192 op=LOAD Jan 27 13:03:22.883000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.883000 audit: BPF prog-id=192 op=UNLOAD Jan 27 13:03:22.883000 audit[4300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.883000 audit: BPF prog-id=191 op=UNLOAD Jan 27 13:03:22.883000 audit[4300]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.883000 audit: BPF prog-id=193 op=LOAD Jan 27 13:03:22.883000 audit[4300]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4254 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:22.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326262383333313835376537353333643432346463363635643934 Jan 27 13:03:22.937161 systemd[1]: Started cri-containerd-854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14.scope - libcontainer container 854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14. 
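Every SYSCALL record above is paired with a PROCTITLE record whose value is the hex-encoded, NUL-separated argv of the audited process. A minimal Python sketch for turning such a value back into a readable command line (the decode_proctitle helper is illustrative, not part of auditd or containerd; the sample value is an abbreviated copy of the proctitle fields logged above):

    #!/usr/bin/env python3
    # Decode an audit PROCTITLE value (hex-encoded argv, NUL-separated) into a
    # readable command line. The sample is a shortened copy of the runc
    # proctitle fields from the audit records above.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

    if __name__ == "__main__":
        sample = (
            "72756E63"      # "runc"
            "00"
            "2D2D726F6F74"  # "--root"
            "00"
            "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"  # "/run/containerd/runc/k8s.io"
        )
        print(decode_proctitle(sample))  # -> runc --root /run/containerd/runc/k8s.io

Decoded in full, the proctitle values above continue with "--log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container id prefix>", which is what ties each audit burst back to a specific containerd task.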
Jan 27 13:03:22.948136 containerd[1636]: time="2026-01-27T13:03:22.948070215Z" level=info msg="connecting to shim d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae" address="unix:///run/containerd/s/08fb27245f6a710055e5059d5dbfbd5ca2651c9d3e503f866f47ab504960834d" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:23.030185 containerd[1636]: time="2026-01-27T13:03:23.030114416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gghgq,Uid:8a652343-1e00-4d74-90a4-253edca0200b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ca262647499d6ce63b733a8e22ec95f9c11cd14143d7254f1711277178fb030\"" Jan 27 13:03:23.044000 audit: BPF prog-id=194 op=LOAD Jan 27 13:03:23.047000 audit: BPF prog-id=195 op=LOAD Jan 27 13:03:23.047000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.048000 audit: BPF prog-id=195 op=UNLOAD Jan 27 13:03:23.048000 audit[4386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.048000 audit: BPF prog-id=196 op=LOAD Jan 27 13:03:23.048000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.049000 audit: BPF prog-id=197 op=LOAD Jan 27 13:03:23.049000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.050000 audit: BPF prog-id=197 op=UNLOAD Jan 27 13:03:23.050000 audit[4386]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.050000 audit: BPF prog-id=196 op=UNLOAD Jan 27 13:03:23.050000 audit[4386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.050000 audit: BPF prog-id=198 op=LOAD Jan 27 13:03:23.050000 audit[4386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4367 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835346334653135636131623031366237616639316330333465623534 Jan 27 13:03:23.061399 containerd[1636]: time="2026-01-27T13:03:23.061301119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 13:03:23.124173 systemd[1]: Started cri-containerd-d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae.scope - libcontainer container d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae. 
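Each runc start above is accompanied by a burst of paired "BPF prog-id=N op=LOAD" / "op=UNLOAD" audit events. A minimal sketch that tallies them from journal text on stdin, so the balance of loads against unloads can be read off directly; the regular expression matches the audit format shown above, and the script itself is only an illustrative helper:

    #!/usr/bin/env python3
    # Tally BPF LOAD/UNLOAD audit events from journal text such as the entries
    # above and report how many programs remain loaded after the burst.
    import re
    import sys
    from collections import Counter

    BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def tally(lines):
        ops = Counter()
        for line in lines:
            # A single journal line here can pack several audit events,
            # so count every match on the line, not just the first.
            for _prog_id, op in BPF_RE.findall(line):
                ops[op] += 1
        return ops

    if __name__ == "__main__":
        counts = tally(sys.stdin)
        print(f"LOAD={counts['LOAD']} UNLOAD={counts['UNLOAD']} "
              f"still loaded={counts['LOAD'] - counts['UNLOAD']}")

Run against the entries above, it shows that most prog-ids are unloaded again almost immediately, with only the last program of each burst left resident.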
Jan 27 13:03:23.126666 containerd[1636]: time="2026-01-27T13:03:23.126456479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd56df978-nczgt,Uid:8569123c-abee-43a6-aac4-12a05912eeb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\"" Jan 27 13:03:23.178102 containerd[1636]: time="2026-01-27T13:03:23.177419768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kpqqc,Uid:edbc19c4-c5a2-4875-9a7b-5c829dca568c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c42bb8331857e7533d424dc665d94033cf79621567c27a041367fe540f73957e\"" Jan 27 13:03:23.188610 containerd[1636]: time="2026-01-27T13:03:23.188288208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-6grht,Uid:31e32b52-76dd-4c4a-b037-c1818999e71b,Namespace:calico-apiserver,Attempt:0,}" Jan 27 13:03:23.194000 audit: BPF prog-id=199 op=LOAD Jan 27 13:03:23.197000 audit: BPF prog-id=200 op=LOAD Jan 27 13:03:23.197000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.197000 audit: BPF prog-id=200 op=UNLOAD Jan 27 13:03:23.197000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.198000 audit: BPF prog-id=201 op=LOAD Jan 27 13:03:23.198000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.198000 audit: BPF prog-id=202 op=LOAD Jan 27 13:03:23.198000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.198000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.199000 audit: BPF prog-id=202 op=UNLOAD Jan 27 13:03:23.199000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.201000 audit: BPF prog-id=201 op=UNLOAD Jan 27 13:03:23.201000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.201000 audit: BPF prog-id=203 op=LOAD Jan 27 13:03:23.201000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4412 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431343365306233386364626231346232663038663765336337326663 Jan 27 13:03:23.231367 containerd[1636]: time="2026-01-27T13:03:23.230622141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cdccb5f6-wx4tb,Uid:d20e3435-24a0-4d45-b1d0-2db2610f07b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"854c4e15ca1b016b7af91c034eb54a55e1e5393c24547ca52c54b281d9f33e14\"" Jan 27 13:03:23.234806 systemd-networkd[1541]: cali8b4e03cef37: Gained IPv6LL Jan 27 13:03:23.317505 containerd[1636]: time="2026-01-27T13:03:23.317456663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-lgkj9,Uid:49dcb295-61bb-47ac-9721-51e5abeacfeb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d143e0b38cdbb14b2f08f7e3c72fc06821a8d91ad5354ed2841e5c7ee6de30ae\"" Jan 27 13:03:23.362812 systemd-networkd[1541]: cali0257fd6ed27: Gained IPv6LL Jan 27 13:03:23.413489 systemd-networkd[1541]: cali56fd02d56f3: Link UP Jan 27 13:03:23.415030 systemd-networkd[1541]: cali56fd02d56f3: Gained carrier Jan 27 13:03:23.440018 containerd[1636]: 2026-01-27 13:03:23.272 [INFO][4475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 27 13:03:23.440018 containerd[1636]: 2026-01-27 13:03:23.303 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0 calico-apiserver-675cb5c68f- calico-apiserver 31e32b52-76dd-4c4a-b037-c1818999e71b 883 0 2026-01-27 13:02:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:675cb5c68f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com calico-apiserver-675cb5c68f-6grht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56fd02d56f3 [] [] }} ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-" Jan 27 13:03:23.440018 containerd[1636]: 2026-01-27 13:03:23.304 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.440018 containerd[1636]: 2026-01-27 13:03:23.351 [INFO][4500] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" HandleID="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.351 [INFO][4500] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" HandleID="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"calico-apiserver-675cb5c68f-6grht", "timestamp":"2026-01-27 13:03:23.351575414 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.351 [INFO][4500] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.352 [INFO][4500] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.352 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.362 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.370 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.377 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.381 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.440753 containerd[1636]: 2026-01-27 13:03:23.384 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.384 [INFO][4500] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.386 [INFO][4500] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324 Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.392 [INFO][4500] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.400 [INFO][4500] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.70/26] block=192.168.64.64/26 handle="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.401 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.70/26] handle="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.401 [INFO][4500] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
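The IPAM trace above confirms this node's affinity for the block 192.168.64.64/26 and allocates single addresses out of it (192.168.64.69 earlier, 192.168.64.70 here), while each workload endpoint is still written with a /32 in its IPNetworks field. A quick standard-library check of that arithmetic, with the values copied from the log:

    #!/usr/bin/env python3
    # Sanity-check the Calico IPAM values reported above: the node's affinity
    # block and the addresses assigned out of it.
    import ipaddress

    block = ipaddress.ip_network("192.168.64.64/26")
    assigned = [ipaddress.ip_address("192.168.64.69"),
                ipaddress.ip_address("192.168.64.70")]

    print(f"{block} holds {block.num_addresses} addresses")  # 64
    for ip in assigned:
        print(f"{ip} in {block}: {ip in block}")             # True for both

The /26 only describes the allocation block affine to this node; each pod keeps a /32 of its own, matching the IPNetworks value written to the endpoint above.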
Jan 27 13:03:23.441118 containerd[1636]: 2026-01-27 13:03:23.401 [INFO][4500] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.70/26] IPv6=[] ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" HandleID="k8s-pod-network.485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Workload="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.442071 containerd[1636]: 2026-01-27 13:03:23.405 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0", GenerateName:"calico-apiserver-675cb5c68f-", Namespace:"calico-apiserver", SelfLink:"", UID:"31e32b52-76dd-4c4a-b037-c1818999e71b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"675cb5c68f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-675cb5c68f-6grht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56fd02d56f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:23.442174 containerd[1636]: 2026-01-27 13:03:23.406 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.70/32] ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.442174 containerd[1636]: 2026-01-27 13:03:23.406 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56fd02d56f3 ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.442174 containerd[1636]: 2026-01-27 13:03:23.415 [INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.442371 containerd[1636]: 2026-01-27 13:03:23.415 
[INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0", GenerateName:"calico-apiserver-675cb5c68f-", Namespace:"calico-apiserver", SelfLink:"", UID:"31e32b52-76dd-4c4a-b037-c1818999e71b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"675cb5c68f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324", Pod:"calico-apiserver-675cb5c68f-6grht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56fd02d56f3", MAC:"56:ee:3e:56:21:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:23.442548 containerd[1636]: 2026-01-27 13:03:23.434 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" Namespace="calico-apiserver" Pod="calico-apiserver-675cb5c68f-6grht" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-calico--apiserver--675cb5c68f--6grht-eth0" Jan 27 13:03:23.464035 containerd[1636]: time="2026-01-27T13:03:23.463927516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:23.465607 containerd[1636]: time="2026-01-27T13:03:23.465110721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 13:03:23.465607 containerd[1636]: time="2026-01-27T13:03:23.465214522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:23.465918 kubelet[2952]: E0127 13:03:23.465850 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:03:23.466493 kubelet[2952]: E0127 13:03:23.465954 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:03:23.468265 containerd[1636]: time="2026-01-27T13:03:23.466711271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 13:03:23.471938 containerd[1636]: time="2026-01-27T13:03:23.471870682Z" level=info msg="connecting to shim 485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324" address="unix:///run/containerd/s/a979c229a475c996ec4b7dfd58c4e8140024a6c9e97c9a4dfedd66953097ec9d" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:23.477764 kubelet[2952]: E0127 13:03:23.477586 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:23.523834 systemd[1]: Started cri-containerd-485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324.scope - libcontainer container 485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324. 
Jan 27 13:03:23.545000 audit: BPF prog-id=204 op=LOAD Jan 27 13:03:23.546000 audit: BPF prog-id=205 op=LOAD Jan 27 13:03:23.546000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.546000 audit: BPF prog-id=205 op=UNLOAD Jan 27 13:03:23.546000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.546000 audit: BPF prog-id=206 op=LOAD Jan 27 13:03:23.546000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.546000 audit: BPF prog-id=207 op=LOAD Jan 27 13:03:23.546000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.546000 audit: BPF prog-id=207 op=UNLOAD Jan 27 13:03:23.546000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.546000 audit: BPF prog-id=206 op=UNLOAD Jan 27 13:03:23.546000 audit[4533]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.547000 audit: BPF prog-id=208 op=LOAD Jan 27 13:03:23.547000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4520 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:23.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438356537636238353639333865343165306165323137363039666434 Jan 27 13:03:23.555809 systemd-networkd[1541]: cali89d09fe8299: Gained IPv6LL Jan 27 13:03:23.603399 containerd[1636]: time="2026-01-27T13:03:23.603339230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-675cb5c68f-6grht,Uid:31e32b52-76dd-4c4a-b037-c1818999e71b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"485e7cb856938e41e0ae217609fd45ff3e3ea04eae6e276626ef592587ecb324\"" Jan 27 13:03:23.682737 systemd-networkd[1541]: cali6178fe79854: Gained IPv6LL Jan 27 13:03:23.794291 containerd[1636]: time="2026-01-27T13:03:23.794226447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:23.796861 containerd[1636]: time="2026-01-27T13:03:23.796284050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 13:03:23.796861 containerd[1636]: time="2026-01-27T13:03:23.796396514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:23.796962 kubelet[2952]: E0127 13:03:23.796553 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:03:23.796962 kubelet[2952]: E0127 13:03:23.796611 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:03:23.797714 kubelet[2952]: E0127 13:03:23.797158 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f16e2e6836fb414cba60abb811528f7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tnq8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd56df978-nczgt_calico-system(8569123c-abee-43a6-aac4-12a05912eeb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:23.797868 containerd[1636]: time="2026-01-27T13:03:23.797262802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 13:03:24.172835 containerd[1636]: time="2026-01-27T13:03:24.172723764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:24.176146 containerd[1636]: time="2026-01-27T13:03:24.175584831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 13:03:24.176146 containerd[1636]: time="2026-01-27T13:03:24.175658976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:24.176853 kubelet[2952]: E0127 13:03:24.176537 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:03:24.177026 kubelet[2952]: E0127 13:03:24.176982 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:03:24.178007 kubelet[2952]: E0127 13:03:24.177769 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m72dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kpqqc_calico-system(edbc19c4-c5a2-4875-9a7b-5c829dca568c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:24.178270 containerd[1636]: time="2026-01-27T13:03:24.178016039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 13:03:24.179335 kubelet[2952]: E0127 13:03:24.179298 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" 
podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:03:24.198384 kubelet[2952]: E0127 13:03:24.198078 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:03:24.277000 audit[4643]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:24.277000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd833b3ea0 a2=0 a3=7ffd833b3e8c items=0 ppid=3095 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:24.284000 audit[4643]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:24.284000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd833b3ea0 a2=0 a3=0 items=0 ppid=3095 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:24.509253 containerd[1636]: time="2026-01-27T13:03:24.508083049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:24.510446 containerd[1636]: time="2026-01-27T13:03:24.510400354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 13:03:24.510848 containerd[1636]: time="2026-01-27T13:03:24.510587651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:24.512579 kubelet[2952]: E0127 13:03:24.511420 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:03:24.512579 kubelet[2952]: E0127 13:03:24.511511 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:03:24.513167 containerd[1636]: time="2026-01-27T13:03:24.512126975Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:03:24.514105 kubelet[2952]: E0127 13:03:24.513751 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfrmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccb5f6-wx4tb_calico-system(d20e3435-24a0-4d45-b1d0-2db2610f07b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:24.516538 kubelet[2952]: E0127 13:03:24.516410 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:03:24.770756 
systemd-networkd[1541]: calia804a566670: Gained IPv6LL Jan 27 13:03:24.830025 containerd[1636]: time="2026-01-27T13:03:24.829967631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:24.831155 containerd[1636]: time="2026-01-27T13:03:24.831106141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:03:24.831291 containerd[1636]: time="2026-01-27T13:03:24.831217669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:24.831859 kubelet[2952]: E0127 13:03:24.831562 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:24.831859 kubelet[2952]: E0127 13:03:24.831643 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:24.832258 containerd[1636]: time="2026-01-27T13:03:24.832185402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 13:03:24.832890 kubelet[2952]: E0127 13:03:24.832513 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-lgkj9_calico-apiserver(49dcb295-61bb-47ac-9721-51e5abeacfeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:24.833890 kubelet[2952]: E0127 13:03:24.833736 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:03:24.842000 audit: BPF prog-id=209 op=LOAD Jan 27 13:03:24.842000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd1c80290 a2=98 a3=1fffffffffffffff items=0 ppid=4585 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.842000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 13:03:24.843000 audit: BPF prog-id=209 op=UNLOAD Jan 27 13:03:24.843000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffd1c80260 a3=0 items=0 ppid=4585 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.843000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 13:03:24.843000 audit: BPF prog-id=210 op=LOAD Jan 27 13:03:24.843000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd1c80170 a2=94 a3=3 items=0 ppid=4585 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.843000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 13:03:24.844000 audit: BPF prog-id=210 op=UNLOAD Jan 27 13:03:24.844000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffd1c80170 a2=94 a3=3 items=0 ppid=4585 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.844000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 13:03:24.844000 audit: BPF prog-id=211 op=LOAD Jan 27 13:03:24.844000 audit[4678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd1c801b0 a2=94 a3=7fffd1c80390 items=0 ppid=4585 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.844000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 13:03:24.844000 audit: BPF prog-id=211 op=UNLOAD Jan 27 13:03:24.844000 audit[4678]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffd1c801b0 a2=94 a3=7fffd1c80390 items=0 ppid=4585 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.844000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 27 13:03:24.847000 audit: BPF prog-id=212 op=LOAD Jan 27 13:03:24.847000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7d16c660 a2=98 a3=3 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:24.847000 audit: BPF prog-id=212 op=UNLOAD Jan 27 13:03:24.847000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc7d16c630 a3=0 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:24.848000 audit: BPF prog-id=213 op=LOAD Jan 27 13:03:24.848000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7d16c450 a2=94 a3=54428f items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.848000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:24.848000 audit: BPF prog-id=213 op=UNLOAD Jan 27 13:03:24.848000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7d16c450 a2=94 a3=54428f items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.848000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:24.848000 audit: BPF prog-id=214 op=LOAD Jan 27 13:03:24.848000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7d16c480 a2=94 a3=2 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.848000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:24.848000 audit: BPF prog-id=214 op=UNLOAD Jan 27 13:03:24.848000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7d16c480 a2=0 a3=2 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:24.848000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.110587 kernel: kauditd_printk_skb: 179 callbacks suppressed Jan 27 13:03:25.111682 kernel: audit: type=1334 audit(1769519005.097:646): prog-id=215 op=LOAD Jan 27 13:03:25.097000 audit: BPF prog-id=215 op=LOAD Jan 27 13:03:25.097000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7d16c340 a2=94 a3=1 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.125690 kernel: audit: type=1300 audit(1769519005.097:646): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7d16c340 a2=94 a3=1 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.132547 kernel: audit: type=1327 audit(1769519005.097:646): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.109000 audit: BPF prog-id=215 op=UNLOAD Jan 27 13:03:25.141894 kernel: audit: type=1334 audit(1769519005.109:647): prog-id=215 op=UNLOAD Jan 27 13:03:25.142064 kernel: audit: type=1300 audit(1769519005.109:647): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7d16c340 a2=94 a3=1 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.109000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7d16c340 a2=94 a3=1 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.109000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.149692 kernel: audit: type=1327 audit(1769519005.109:647): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.150549 containerd[1636]: time="2026-01-27T13:03:25.150429661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:25.152575 containerd[1636]: time="2026-01-27T13:03:25.152258241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 13:03:25.152575 containerd[1636]: time="2026-01-27T13:03:25.152449861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:25.153567 kubelet[2952]: E0127 13:03:25.152965 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:03:25.148000 audit: BPF prog-id=216 op=LOAD Jan 27 13:03:25.158645 kernel: audit: type=1334 audit(1769519005.148:648): prog-id=216 op=LOAD Jan 27 13:03:25.157461 systemd-networkd[1541]: cali56fd02d56f3: Gained IPv6LL Jan 27 13:03:25.148000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7d16c330 a2=94 a3=4 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.165190 kernel: audit: type=1300 audit(1769519005.148:648): arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7d16c330 a2=94 a3=4 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.168346 kernel: audit: type=1327 audit(1769519005.148:648): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.168499 kernel: audit: type=1334 audit(1769519005.148:649): prog-id=216 op=UNLOAD Jan 27 13:03:25.148000 audit: BPF prog-id=216 op=UNLOAD Jan 27 13:03:25.172000 kubelet[2952]: E0127 13:03:25.153130 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:03:25.172732 kubelet[2952]: E0127 13:03:25.171643 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:25.174185 kubelet[2952]: E0127 13:03:25.173552 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:25.174303 containerd[1636]: time="2026-01-27T13:03:25.173846658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:03:25.148000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc7d16c330 a2=0 a3=4 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.148000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.151000 audit: BPF prog-id=217 op=LOAD Jan 27 13:03:25.151000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 
a1=7ffc7d16c190 a2=94 a3=5 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.151000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.153000 audit: BPF prog-id=217 op=UNLOAD Jan 27 13:03:25.153000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc7d16c190 a2=0 a3=5 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.153000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.153000 audit: BPF prog-id=218 op=LOAD Jan 27 13:03:25.153000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7d16c3b0 a2=94 a3=6 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.153000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.153000 audit: BPF prog-id=218 op=UNLOAD Jan 27 13:03:25.153000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc7d16c3b0 a2=0 a3=6 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.153000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.157000 audit: BPF prog-id=219 op=LOAD Jan 27 13:03:25.157000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7d16bb60 a2=94 a3=88 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.157000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.157000 audit: BPF prog-id=220 op=LOAD Jan 27 13:03:25.157000 audit[4679]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc7d16b9e0 a2=94 a3=2 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.157000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.157000 audit: BPF prog-id=220 op=UNLOAD Jan 27 13:03:25.157000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc7d16ba10 a2=0 a3=7ffc7d16bb10 items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.157000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.164000 audit: BPF prog-id=219 op=UNLOAD Jan 27 13:03:25.164000 audit[4679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1a091d10 a2=0 a3=9b10626404cd7efc items=0 ppid=4585 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.164000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 27 13:03:25.200000 audit: BPF prog-id=221 op=LOAD Jan 27 13:03:25.200000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3b330690 a2=98 a3=1999999999999999 items=0 ppid=4585 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.200000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 13:03:25.200000 audit: BPF prog-id=221 op=UNLOAD Jan 27 13:03:25.200000 audit[4684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff3b330660 a3=0 items=0 ppid=4585 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.200000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 13:03:25.200000 audit: BPF prog-id=222 op=LOAD Jan 27 13:03:25.200000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3b330570 a2=94 a3=ffff items=0 ppid=4585 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.200000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 13:03:25.200000 audit: BPF prog-id=222 op=UNLOAD Jan 27 13:03:25.200000 audit[4684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff3b330570 a2=94 a3=ffff items=0 ppid=4585 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.200000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 13:03:25.200000 audit: BPF prog-id=223 op=LOAD Jan 27 13:03:25.200000 audit[4684]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff3b3305b0 a2=94 a3=7fff3b330790 items=0 ppid=4585 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.200000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 13:03:25.200000 audit: BPF prog-id=223 op=UNLOAD Jan 27 13:03:25.200000 audit[4684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff3b3305b0 a2=94 a3=7fff3b330790 items=0 ppid=4585 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.200000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 27 13:03:25.212159 kubelet[2952]: E0127 13:03:25.211993 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:03:25.214489 kubelet[2952]: E0127 13:03:25.212355 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:03:25.215297 kubelet[2952]: E0127 13:03:25.214447 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:25.217064 kubelet[2952]: E0127 13:03:25.216894 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:03:25.343000 audit[4697]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:25.343000 audit[4697]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff1a38d2b0 a2=0 a3=7fff1a38d29c items=0 ppid=3095 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:25.347000 audit[4697]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4697 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:25.347000 audit[4697]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff1a38d2b0 a2=0 a3=0 items=0 ppid=3095 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:25.416755 systemd-networkd[1541]: vxlan.calico: Link UP Jan 27 13:03:25.416767 systemd-networkd[1541]: vxlan.calico: Gained carrier Jan 27 13:03:25.452000 audit: BPF prog-id=224 op=LOAD Jan 27 13:03:25.452000 audit[4710]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff193b3800 a2=98 a3=20 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.452000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.454000 audit: BPF prog-id=224 op=UNLOAD Jan 27 13:03:25.454000 audit[4710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff193b37d0 a3=0 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.454000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.454000 audit: BPF prog-id=225 op=LOAD Jan 27 13:03:25.454000 audit[4710]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff193b3610 a2=94 a3=54428f items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.454000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=225 op=UNLOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff193b3610 a2=94 a3=54428f items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=226 op=LOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff193b3640 a2=94 a3=2 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=226 op=UNLOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff193b3640 a2=0 a3=2 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=227 op=LOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff193b33f0 a2=94 a3=4 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=227 op=UNLOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff193b33f0 a2=94 a3=4 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=228 op=LOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=6 a0=5 a1=7fff193b34f0 a2=94 a3=7fff193b3670 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.456000 audit: BPF prog-id=228 op=UNLOAD Jan 27 13:03:25.456000 audit[4710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff193b34f0 a2=0 a3=7fff193b3670 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.456000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.461000 audit: BPF prog-id=229 op=LOAD Jan 27 13:03:25.461000 audit[4710]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff193b2c20 a2=94 a3=2 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.461000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.462000 audit: BPF prog-id=229 op=UNLOAD Jan 27 13:03:25.462000 audit[4710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff193b2c20 a2=0 a3=2 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.462000 audit: BPF prog-id=230 op=LOAD Jan 27 13:03:25.462000 audit[4710]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff193b2d20 a2=94 a3=30 items=0 ppid=4585 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.462000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 27 13:03:25.481000 audit: BPF prog-id=231 op=LOAD Jan 27 13:03:25.481000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7f071450 a2=98 a3=0 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.481000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.482000 audit: BPF prog-id=231 op=UNLOAD Jan 27 13:03:25.482000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc7f071420 a3=0 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.482000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.484000 audit: BPF prog-id=232 op=LOAD Jan 27 13:03:25.484000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7f071240 a2=94 a3=54428f items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.484000 audit: BPF prog-id=232 op=UNLOAD Jan 27 13:03:25.484000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7f071240 a2=94 a3=54428f items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.484000 audit: BPF prog-id=233 op=LOAD Jan 27 13:03:25.484000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7f071270 a2=94 a3=2 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.484000 audit: BPF prog-id=233 op=UNLOAD Jan 27 13:03:25.484000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7f071270 a2=0 a3=2 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.484000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.500861 containerd[1636]: time="2026-01-27T13:03:25.500793853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:25.504757 containerd[1636]: time="2026-01-27T13:03:25.504378173Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:03:25.504757 containerd[1636]: time="2026-01-27T13:03:25.504493315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:25.505455 kubelet[2952]: E0127 13:03:25.505327 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:25.507436 kubelet[2952]: E0127 13:03:25.505491 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:25.507436 kubelet[2952]: E0127 13:03:25.506462 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzkh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:25.508152 kubelet[2952]: E0127 13:03:25.507850 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:03:25.510844 containerd[1636]: time="2026-01-27T13:03:25.510586060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 13:03:25.761000 audit: BPF prog-id=234 op=LOAD Jan 27 13:03:25.761000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7f071130 a2=94 a3=1 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.761000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.762000 audit: BPF prog-id=234 op=UNLOAD Jan 27 13:03:25.762000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7f071130 a2=94 a3=1 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.762000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.780000 audit: BPF prog-id=235 op=LOAD Jan 27 13:03:25.780000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7f071120 a2=94 a3=4 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.780000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.780000 audit: BPF prog-id=235 op=UNLOAD Jan 27 13:03:25.780000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc7f071120 a2=0 a3=4 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.780000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.781000 audit: BPF prog-id=236 op=LOAD Jan 27 13:03:25.781000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc7f070f80 a2=94 a3=5 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 27 13:03:25.781000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.781000 audit: BPF prog-id=236 op=UNLOAD Jan 27 13:03:25.781000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc7f070f80 a2=0 a3=5 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.781000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.781000 audit: BPF prog-id=237 op=LOAD Jan 27 13:03:25.781000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7f0711a0 a2=94 a3=6 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.781000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.781000 audit: BPF prog-id=237 op=UNLOAD Jan 27 13:03:25.781000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc7f0711a0 a2=0 a3=6 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.781000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.783000 audit: BPF prog-id=238 op=LOAD Jan 27 13:03:25.783000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7f070950 a2=94 a3=88 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.784000 audit: BPF prog-id=239 op=LOAD Jan 27 13:03:25.784000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc7f0707d0 a2=94 a3=2 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.784000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.785000 audit: BPF prog-id=239 op=UNLOAD Jan 27 13:03:25.785000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc7f070800 a2=0 a3=7ffc7f070900 items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.785000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.785000 audit: BPF prog-id=238 op=UNLOAD Jan 27 13:03:25.785000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7b81d10 a2=0 a3=e6505ff921f0b21a items=0 ppid=4585 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.785000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 27 13:03:25.795000 audit: BPF prog-id=230 op=UNLOAD Jan 27 13:03:25.795000 audit[4585]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000d20f00 a2=0 a3=0 items=0 ppid=4563 pid=4585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.795000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 27 13:03:25.831978 containerd[1636]: time="2026-01-27T13:03:25.831890861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:25.833896 containerd[1636]: time="2026-01-27T13:03:25.833758315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 13:03:25.833896 containerd[1636]: time="2026-01-27T13:03:25.833834840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:25.835082 kubelet[2952]: E0127 13:03:25.834976 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:03:25.835468 kubelet[2952]: E0127 13:03:25.835078 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:03:25.835468 kubelet[2952]: E0127 13:03:25.835245 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tnq8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5dd56df978-nczgt_calico-system(8569123c-abee-43a6-aac4-12a05912eeb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:25.836532 kubelet[2952]: E0127 13:03:25.836463 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dd56df978-nczgt" podUID="8569123c-abee-43a6-aac4-12a05912eeb0" Jan 27 13:03:25.949000 audit[4752]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=4752 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:25.949000 audit[4752]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf461e460 a2=0 a3=7ffdf461e44c items=0 ppid=4585 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.949000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:25.958000 audit[4754]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=4754 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:25.958000 audit[4754]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffeb6a59480 a2=0 a3=7ffeb6a5946c items=0 ppid=4585 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.958000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:25.967000 audit[4753]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4753 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:25.967000 audit[4753]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe9b84e070 a2=0 a3=7ffe9b84e05c items=0 ppid=4585 pid=4753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.967000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:25.973000 audit[4757]: NETFILTER_CFG table=filter:128 family=2 entries=263 op=nft_register_chain pid=4757 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:25.973000 audit[4757]: SYSCALL arch=c000003e syscall=46 success=yes exit=155896 a0=3 a1=7fff8e5b4730 a2=0 a3=7fff8e5b471c items=0 ppid=4585 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:25.973000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:26.213922 containerd[1636]: time="2026-01-27T13:03:26.213774907Z" level=info msg="StopPodSandbox for \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\"" Jan 27 13:03:26.216254 kubelet[2952]: E0127 13:03:26.215187 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:03:26.258257 systemd[1]: cri-containerd-9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90.scope: Deactivated successfully. 
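An aside on the audit records above: the PROCTITLE field is the process command line, hex-encoded with NUL bytes between the arguments, which is why the bpftool and iptables-nft-re invocations look opaque. A minimal sketch for decoding it offline is below; the helper name is illustrative, and the sample value is copied from one of the bpftool entries in this log.

```python
# Decode the NUL-separated hex "proctitle" field from the audit records above.
PROCTITLE_HEX = (
    "627066746F6F6C002D2D6A736F6E002D2D70726574747900"
    "70726F670073686F770070696E6E6564002F7379732F66732F"
    "6270662F63616C69636F2F7864702F70726566696C7465725F"
    "76315F63616C69636F5F746D705F41"
)

def decode_proctitle(hex_value: str) -> str:
    """The audit subsystem stores argv as hex, with NUL bytes between arguments."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

print(decode_proctitle(PROCTITLE_HEX))
# bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```

Applied to the iptables-nft-re records just above, the same decoding yields `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`.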
Jan 27 13:03:26.261000 audit: BPF prog-id=179 op=UNLOAD Jan 27 13:03:26.261000 audit: BPF prog-id=184 op=UNLOAD Jan 27 13:03:26.274302 containerd[1636]: time="2026-01-27T13:03:26.274124010Z" level=info msg="received sandbox exit event container_id:\"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" id:\"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" exit_status:137 exited_at:{seconds:1769519006 nanos:266588481}" monitor_name=podsandbox Jan 27 13:03:26.301000 audit[4785]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:26.301000 audit[4785]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffddd77faa0 a2=0 a3=7ffddd77fa8c items=0 ppid=3095 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:26.301000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:26.304000 audit[4785]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:26.304000 audit[4785]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffddd77faa0 a2=0 a3=0 items=0 ppid=3095 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:26.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:26.328000 audit[4793]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:26.328000 audit[4793]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcbba4d960 a2=0 a3=7ffcbba4d94c items=0 ppid=3095 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:26.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:26.337000 audit[4793]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:26.337000 audit[4793]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcbba4d960 a2=0 a3=0 items=0 ppid=3095 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:26.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:26.341091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90-rootfs.mount: Deactivated successfully. 
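The sandbox exit event above reports exit_status:137 for the old whisker pod sandbox. That is the conventional 128-plus-signal encoding, i.e. the pause container was killed with SIGKILL as part of StopPodSandbox; a one-line check (plain Python, nothing needed from the log beyond the number):

```python
import signal

# exit_status 137 from the sandbox exit event above: runtimes report
# "killed by signal N" as 128 + N.
print(signal.Signals(137 - 128).name)  # SIGKILL
```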
Jan 27 13:03:26.353116 containerd[1636]: time="2026-01-27T13:03:26.352758261Z" level=info msg="shim disconnected" id=9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90 namespace=k8s.io Jan 27 13:03:26.353116 containerd[1636]: time="2026-01-27T13:03:26.352815779Z" level=info msg="cleaning up after shim disconnected" id=9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90 namespace=k8s.io Jan 27 13:03:26.353116 containerd[1636]: time="2026-01-27T13:03:26.352831726Z" level=info msg="cleaning up dead shim" id=9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90 namespace=k8s.io Jan 27 13:03:26.432540 containerd[1636]: time="2026-01-27T13:03:26.432070981Z" level=info msg="received sandbox container exit event sandbox_id:\"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" exit_status:137 exited_at:{seconds:1769519006 nanos:266588481}" monitor_name=criService Jan 27 13:03:26.432875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90-shm.mount: Deactivated successfully. Jan 27 13:03:26.520772 systemd-networkd[1541]: cali8b4e03cef37: Link DOWN Jan 27 13:03:26.520791 systemd-networkd[1541]: cali8b4e03cef37: Lost carrier Jan 27 13:03:26.569000 audit[4834]: NETFILTER_CFG table=filter:133 family=2 entries=71 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:26.569000 audit[4834]: SYSCALL arch=c000003e syscall=46 success=yes exit=8164 a0=3 a1=7ffdbc77bb70 a2=0 a3=7ffdbc77bb5c items=0 ppid=4585 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:26.569000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:26.571000 audit[4834]: NETFILTER_CFG table=filter:134 family=2 entries=8 op=nft_unregister_chain pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:26.571000 audit[4834]: SYSCALL arch=c000003e syscall=46 success=yes exit=1136 a0=3 a1=7ffdbc77bb70 a2=0 a3=55671e49a000 items=0 ppid=4585 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:26.571000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.517 [INFO][4818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.518 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" iface="eth0" netns="/var/run/netns/cni-2346408b-2158-eb8f-8cfe-b7f61cb6a684" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.519 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" iface="eth0" netns="/var/run/netns/cni-2346408b-2158-eb8f-8cfe-b7f61cb6a684" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.526 [INFO][4818] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" after=8.087265ms iface="eth0" netns="/var/run/netns/cni-2346408b-2158-eb8f-8cfe-b7f61cb6a684" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.526 [INFO][4818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.526 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.594 [INFO][4825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.595 [INFO][4825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:26.689635 containerd[1636]: 2026-01-27 13:03:26.595 [INFO][4825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:26.692788 containerd[1636]: 2026-01-27 13:03:26.679 [INFO][4825] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:26.692788 containerd[1636]: 2026-01-27 13:03:26.679 [INFO][4825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:03:26.692788 containerd[1636]: 2026-01-27 13:03:26.681 [INFO][4825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:03:26.692788 containerd[1636]: 2026-01-27 13:03:26.686 [INFO][4818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:03:26.693890 containerd[1636]: time="2026-01-27T13:03:26.693116291Z" level=info msg="TearDown network for sandbox \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" successfully" Jan 27 13:03:26.693890 containerd[1636]: time="2026-01-27T13:03:26.693198171Z" level=info msg="StopPodSandbox for \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" returns successfully" Jan 27 13:03:26.696143 systemd[1]: run-netns-cni\x2d2346408b\x2d2158\x2deb8f\x2d8cfe\x2db7f61cb6a684.mount: Deactivated successfully. 
Jan 27 13:03:26.821378 kubelet[2952]: I0127 13:03:26.819936 2952 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnq8t\" (UniqueName: \"kubernetes.io/projected/8569123c-abee-43a6-aac4-12a05912eeb0-kube-api-access-tnq8t\") pod \"8569123c-abee-43a6-aac4-12a05912eeb0\" (UID: \"8569123c-abee-43a6-aac4-12a05912eeb0\") " Jan 27 13:03:26.822024 kubelet[2952]: I0127 13:03:26.821987 2952 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-ca-bundle\") pod \"8569123c-abee-43a6-aac4-12a05912eeb0\" (UID: \"8569123c-abee-43a6-aac4-12a05912eeb0\") " Jan 27 13:03:26.822440 kubelet[2952]: I0127 13:03:26.822197 2952 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-backend-key-pair\") pod \"8569123c-abee-43a6-aac4-12a05912eeb0\" (UID: \"8569123c-abee-43a6-aac4-12a05912eeb0\") " Jan 27 13:03:26.833411 systemd[1]: var-lib-kubelet-pods-8569123c\x2dabee\x2d43a6\x2daac4\x2d12a05912eeb0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 27 13:03:26.836109 kubelet[2952]: I0127 13:03:26.831737 2952 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8569123c-abee-43a6-aac4-12a05912eeb0" (UID: "8569123c-abee-43a6-aac4-12a05912eeb0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 27 13:03:26.836109 kubelet[2952]: I0127 13:03:26.827836 2952 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8569123c-abee-43a6-aac4-12a05912eeb0" (UID: "8569123c-abee-43a6-aac4-12a05912eeb0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 27 13:03:26.843274 systemd[1]: var-lib-kubelet-pods-8569123c\x2dabee\x2d43a6\x2daac4\x2d12a05912eeb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnq8t.mount: Deactivated successfully. Jan 27 13:03:26.846332 kubelet[2952]: I0127 13:03:26.846257 2952 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8569123c-abee-43a6-aac4-12a05912eeb0-kube-api-access-tnq8t" (OuterVolumeSpecName: "kube-api-access-tnq8t") pod "8569123c-abee-43a6-aac4-12a05912eeb0" (UID: "8569123c-abee-43a6-aac4-12a05912eeb0"). InnerVolumeSpecName "kube-api-access-tnq8t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 27 13:03:26.928223 kubelet[2952]: I0127 13:03:26.928088 2952 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-backend-key-pair\") on node \"srv-4nwk8.gb1.brightbox.com\" DevicePath \"\"" Jan 27 13:03:26.928223 kubelet[2952]: I0127 13:03:26.928149 2952 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tnq8t\" (UniqueName: \"kubernetes.io/projected/8569123c-abee-43a6-aac4-12a05912eeb0-kube-api-access-tnq8t\") on node \"srv-4nwk8.gb1.brightbox.com\" DevicePath \"\"" Jan 27 13:03:26.928223 kubelet[2952]: I0127 13:03:26.928171 2952 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8569123c-abee-43a6-aac4-12a05912eeb0-whisker-ca-bundle\") on node \"srv-4nwk8.gb1.brightbox.com\" DevicePath \"\"" Jan 27 13:03:27.011646 systemd-networkd[1541]: vxlan.calico: Gained IPv6LL Jan 27 13:03:27.228295 systemd[1]: Removed slice kubepods-besteffort-pod8569123c_abee_43a6_aac4_12a05912eeb0.slice - libcontainer container kubepods-besteffort-pod8569123c_abee_43a6_aac4_12a05912eeb0.slice. Jan 27 13:03:27.367250 systemd[1]: Created slice kubepods-besteffort-pod7c2a8eb8_de11_4d21_a4ea_93f79258d54d.slice - libcontainer container kubepods-besteffort-pod7c2a8eb8_de11_4d21_a4ea_93f79258d54d.slice. Jan 27 13:03:27.372000 audit[4841]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:27.372000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffef06c970 a2=0 a3=7fffef06c95c items=0 ppid=3095 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:27.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:27.378000 audit[4841]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:27.378000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffef06c970 a2=0 a3=0 items=0 ppid=3095 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:27.378000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:27.533659 kubelet[2952]: I0127 13:03:27.533381 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c2a8eb8-de11-4d21-a4ea-93f79258d54d-whisker-ca-bundle\") pod \"whisker-7dbc4fc484-72nqj\" (UID: \"7c2a8eb8-de11-4d21-a4ea-93f79258d54d\") " pod="calico-system/whisker-7dbc4fc484-72nqj" Jan 27 13:03:27.538077 kubelet[2952]: I0127 13:03:27.537681 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8nh\" (UniqueName: \"kubernetes.io/projected/7c2a8eb8-de11-4d21-a4ea-93f79258d54d-kube-api-access-jf8nh\") pod \"whisker-7dbc4fc484-72nqj\" (UID: 
\"7c2a8eb8-de11-4d21-a4ea-93f79258d54d\") " pod="calico-system/whisker-7dbc4fc484-72nqj" Jan 27 13:03:27.538077 kubelet[2952]: I0127 13:03:27.537760 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c2a8eb8-de11-4d21-a4ea-93f79258d54d-whisker-backend-key-pair\") pod \"whisker-7dbc4fc484-72nqj\" (UID: \"7c2a8eb8-de11-4d21-a4ea-93f79258d54d\") " pod="calico-system/whisker-7dbc4fc484-72nqj" Jan 27 13:03:27.678544 containerd[1636]: time="2026-01-27T13:03:27.678366242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dbc4fc484-72nqj,Uid:7c2a8eb8-de11-4d21-a4ea-93f79258d54d,Namespace:calico-system,Attempt:0,}" Jan 27 13:03:27.711555 kubelet[2952]: I0127 13:03:27.711418 2952 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8569123c-abee-43a6-aac4-12a05912eeb0" path="/var/lib/kubelet/pods/8569123c-abee-43a6-aac4-12a05912eeb0/volumes" Jan 27 13:03:27.880735 systemd-networkd[1541]: cali918f18916cc: Link UP Jan 27 13:03:27.881487 systemd-networkd[1541]: cali918f18916cc: Gained carrier Jan 27 13:03:27.903748 containerd[1636]: 2026-01-27 13:03:27.760 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0 whisker-7dbc4fc484- calico-system 7c2a8eb8-de11-4d21-a4ea-93f79258d54d 1004 0 2026-01-27 13:03:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dbc4fc484 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com whisker-7dbc4fc484-72nqj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali918f18916cc [] [] }} ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-" Jan 27 13:03:27.903748 containerd[1636]: 2026-01-27 13:03:27.760 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.903748 containerd[1636]: 2026-01-27 13:03:27.818 [INFO][4858] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" HandleID="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.818 [INFO][4858] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" HandleID="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb260), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"whisker-7dbc4fc484-72nqj", "timestamp":"2026-01-27 13:03:27.818659136 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.818 [INFO][4858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.819 [INFO][4858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.819 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.831 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.837 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.843 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.846 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.905130 containerd[1636]: 2026-01-27 13:03:27.850 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.850 [INFO][4858] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.852 [INFO][4858] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9 Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.860 [INFO][4858] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.868 [INFO][4858] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.71/26] block=192.168.64.64/26 handle="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.868 [INFO][4858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.71/26] handle="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.868 [INFO][4858] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
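For readers following the IPAM walkthrough above: the node holds an affinity for the block 192.168.64.64/26, and the new whisker pod is handed 192.168.64.71 out of it. A /26 covers 64 addresses, .64 through .127, which the standard library can confirm:

```python
import ipaddress

# Affine block and claimed address taken from the IPAM log lines above.
block = ipaddress.ip_network("192.168.64.64/26")
claimed = ipaddress.ip_address("192.168.64.71")

print(block.num_addresses)   # 64
print(block[0], block[-1])   # 192.168.64.64 192.168.64.127
print(claimed in block)      # True
```

The same block hands out .72 to the coredns pod at the end of this section.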
Jan 27 13:03:27.906477 containerd[1636]: 2026-01-27 13:03:27.868 [INFO][4858] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.71/26] IPv6=[] ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" HandleID="k8s-pod-network.a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.907296 containerd[1636]: 2026-01-27 13:03:27.872 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0", GenerateName:"whisker-7dbc4fc484-", Namespace:"calico-system", SelfLink:"", UID:"7c2a8eb8-de11-4d21-a4ea-93f79258d54d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dbc4fc484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7dbc4fc484-72nqj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali918f18916cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:27.907296 containerd[1636]: 2026-01-27 13:03:27.872 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.71/32] ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.907729 containerd[1636]: 2026-01-27 13:03:27.873 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali918f18916cc ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.907729 containerd[1636]: 2026-01-27 13:03:27.882 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.907822 containerd[1636]: 2026-01-27 13:03:27.883 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" 
Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0", GenerateName:"whisker-7dbc4fc484-", Namespace:"calico-system", SelfLink:"", UID:"7c2a8eb8-de11-4d21-a4ea-93f79258d54d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dbc4fc484", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9", Pod:"whisker-7dbc4fc484-72nqj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali918f18916cc", MAC:"96:06:78:e4:8c:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:27.907914 containerd[1636]: 2026-01-27 13:03:27.898 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" Namespace="calico-system" Pod="whisker-7dbc4fc484-72nqj" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--7dbc4fc484--72nqj-eth0" Jan 27 13:03:27.949409 containerd[1636]: time="2026-01-27T13:03:27.949282028Z" level=info msg="connecting to shim a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9" address="unix:///run/containerd/s/13bc8f928abd0131a7a9909ebe0cec32e981e2dd4111af5030f2e54e964d8cca" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:27.963000 audit[4892]: NETFILTER_CFG table=filter:137 family=2 entries=73 op=nft_register_chain pid=4892 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:27.963000 audit[4892]: SYSCALL arch=c000003e syscall=46 success=yes exit=38824 a0=3 a1=7ffcfe4a9ed0 a2=0 a3=7ffcfe4a9ebc items=0 ppid=4585 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:27.963000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:28.007809 systemd[1]: Started cri-containerd-a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9.scope - libcontainer container a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9. 
Jan 27 13:03:28.169000 audit: BPF prog-id=240 op=LOAD Jan 27 13:03:28.170000 audit: BPF prog-id=241 op=LOAD Jan 27 13:03:28.170000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.170000 audit: BPF prog-id=241 op=UNLOAD Jan 27 13:03:28.170000 audit[4897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.170000 audit: BPF prog-id=242 op=LOAD Jan 27 13:03:28.170000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.170000 audit: BPF prog-id=243 op=LOAD Jan 27 13:03:28.170000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.171000 audit: BPF prog-id=243 op=UNLOAD Jan 27 13:03:28.171000 audit[4897]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.171000 audit: BPF prog-id=242 op=UNLOAD Jan 27 13:03:28.171000 audit[4897]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.171000 audit: BPF prog-id=244 op=LOAD Jan 27 13:03:28.171000 audit[4897]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4884 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:28.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137383136613366343339616435323633636365363039333134396134 Jan 27 13:03:28.235791 containerd[1636]: time="2026-01-27T13:03:28.235735044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dbc4fc484-72nqj,Uid:7c2a8eb8-de11-4d21-a4ea-93f79258d54d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7816a3f439ad5263cce6093149a4390dff300279c4aec2ae40fc770d2e665b9\"" Jan 27 13:03:28.237729 containerd[1636]: time="2026-01-27T13:03:28.237699941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 13:03:28.571665 containerd[1636]: time="2026-01-27T13:03:28.571125387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:28.575112 containerd[1636]: time="2026-01-27T13:03:28.574942255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:28.575112 containerd[1636]: time="2026-01-27T13:03:28.575035082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 13:03:28.576205 kubelet[2952]: E0127 13:03:28.576131 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:03:28.576967 kubelet[2952]: E0127 13:03:28.576246 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:03:28.583898 kubelet[2952]: E0127 13:03:28.583787 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f16e2e6836fb414cba60abb811528f7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:28.591048 containerd[1636]: time="2026-01-27T13:03:28.590960295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 13:03:28.907681 containerd[1636]: time="2026-01-27T13:03:28.907437262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:28.908727 containerd[1636]: time="2026-01-27T13:03:28.908614794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 13:03:28.909403 containerd[1636]: time="2026-01-27T13:03:28.908752087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:28.909536 kubelet[2952]: E0127 13:03:28.908944 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:03:28.909536 kubelet[2952]: E0127 13:03:28.909026 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:03:28.909536 kubelet[2952]: E0127 13:03:28.909222 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:28.910736 kubelet[2952]: E0127 13:03:28.910589 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:03:29.233671 kubelet[2952]: E0127 13:03:29.233593 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:03:29.273000 audit[4925]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=4925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:29.273000 audit[4925]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeab535e90 a2=0 a3=7ffeab535e7c items=0 ppid=3095 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:29.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:29.282000 audit[4925]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=4925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:29.282000 audit[4925]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeab535e90 a2=0 a3=0 items=0 ppid=3095 pid=4925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:29.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:29.315653 systemd-networkd[1541]: cali918f18916cc: Gained IPv6LL Jan 27 13:03:30.234726 kubelet[2952]: E0127 13:03:30.232687 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:03:31.697497 containerd[1636]: time="2026-01-27T13:03:31.697399406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrljx,Uid:37d5fe65-0a05-46a2-aa2f-7ad5352003bd,Namespace:kube-system,Attempt:0,}" Jan 27 13:03:31.883917 systemd-networkd[1541]: calic8cf9862f66: Link UP Jan 27 13:03:31.885855 systemd-networkd[1541]: calic8cf9862f66: Gained carrier Jan 27 13:03:31.918831 containerd[1636]: 2026-01-27 13:03:31.769 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0 coredns-668d6bf9bc- kube-system 37d5fe65-0a05-46a2-aa2f-7ad5352003bd 811 0 2026-01-27 13:02:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com coredns-668d6bf9bc-vrljx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic8cf9862f66 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-" Jan 27 13:03:31.918831 containerd[1636]: 2026-01-27 13:03:31.769 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.918831 containerd[1636]: 2026-01-27 13:03:31.822 [INFO][4945] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" HandleID="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Workload="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.822 [INFO][4945] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" HandleID="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Workload="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cef50), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-vrljx", "timestamp":"2026-01-27 13:03:31.822378107 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.822 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.822 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.822 [INFO][4945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.834 [INFO][4945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.843 [INFO][4945] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.849 [INFO][4945] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.851 [INFO][4945] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919203 containerd[1636]: 2026-01-27 13:03:31.854 [INFO][4945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.854 [INFO][4945] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.857 [INFO][4945] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9 Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.862 [INFO][4945] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.871 [INFO][4945] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.72/26] block=192.168.64.64/26 handle="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.871 [INFO][4945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.72/26] handle="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.871 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 27 13:03:31.919599 containerd[1636]: 2026-01-27 13:03:31.871 [INFO][4945] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.72/26] IPv6=[] ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" HandleID="k8s-pod-network.a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Workload="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.919920 containerd[1636]: 2026-01-27 13:03:31.876 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37d5fe65-0a05-46a2-aa2f-7ad5352003bd", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-vrljx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8cf9862f66", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:31.919920 containerd[1636]: 2026-01-27 13:03:31.877 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.72/32] ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.919920 containerd[1636]: 2026-01-27 13:03:31.877 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8cf9862f66 ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.919920 containerd[1636]: 2026-01-27 13:03:31.886 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.919920 containerd[1636]: 2026-01-27 13:03:31.887 [INFO][4933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"37d5fe65-0a05-46a2-aa2f-7ad5352003bd", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9", Pod:"coredns-668d6bf9bc-vrljx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8cf9862f66", MAC:"92:8c:1e:21:8f:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:31.919920 containerd[1636]: 2026-01-27 13:03:31.907 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrljx" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vrljx-eth0" Jan 27 13:03:31.970483 containerd[1636]: time="2026-01-27T13:03:31.969786094Z" level=info msg="connecting to shim a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9" address="unix:///run/containerd/s/55a87120b03a7bf8ffeae83fb67a7d6b5ce71b4c713eda66d9e7a161ba661f31" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:32.002006 kernel: kauditd_printk_skb: 215 callbacks suppressed Jan 27 13:03:32.002331 kernel: audit: type=1325 audit(1769519011.988:723): table=filter:140 family=2 entries=62 op=nft_register_chain pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:31.988000 audit[4977]: NETFILTER_CFG table=filter:140 family=2 entries=62 op=nft_register_chain pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:31.988000 audit[4977]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=28492 a0=3 a1=7ffd0dc1fe20 a2=0 a3=7ffd0dc1fe0c items=0 ppid=4585 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.008899 kernel: audit: type=1300 audit(1769519011.988:723): arch=c000003e syscall=46 success=yes exit=28492 a0=3 a1=7ffd0dc1fe20 a2=0 a3=7ffd0dc1fe0c items=0 ppid=4585 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.012746 kernel: audit: type=1327 audit(1769519011.988:723): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:31.988000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:32.035010 systemd[1]: Started cri-containerd-a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9.scope - libcontainer container a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9. Jan 27 13:03:32.059000 audit: BPF prog-id=245 op=LOAD Jan 27 13:03:32.061630 kernel: audit: type=1334 audit(1769519012.059:724): prog-id=245 op=LOAD Jan 27 13:03:32.061000 audit: BPF prog-id=246 op=LOAD Jan 27 13:03:32.074153 kernel: audit: type=1334 audit(1769519012.061:725): prog-id=246 op=LOAD Jan 27 13:03:32.074242 kernel: audit: type=1300 audit(1769519012.061:725): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.061000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.061000 audit: BPF prog-id=246 op=UNLOAD Jan 27 13:03:32.080664 kernel: audit: type=1327 audit(1769519012.061:725): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.080746 kernel: audit: type=1334 audit(1769519012.061:726): prog-id=246 op=UNLOAD Jan 27 13:03:32.082173 kernel: audit: type=1300 audit(1769519012.061:726): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.061000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.088232 kernel: audit: type=1327 audit(1769519012.061:726): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.062000 audit: BPF prog-id=247 op=LOAD Jan 27 13:03:32.062000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.063000 audit: BPF prog-id=248 op=LOAD Jan 27 13:03:32.063000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.063000 audit: BPF prog-id=248 op=UNLOAD Jan 27 13:03:32.063000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.063000 audit: BPF prog-id=247 op=UNLOAD Jan 27 13:03:32.063000 audit[4985]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.063000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.063000 audit: BPF prog-id=249 op=LOAD Jan 27 13:03:32.063000 audit[4985]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4971 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136626436653137316538616263303433353438373162643334663333 Jan 27 13:03:32.144951 containerd[1636]: time="2026-01-27T13:03:32.144858841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrljx,Uid:37d5fe65-0a05-46a2-aa2f-7ad5352003bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9\"" Jan 27 13:03:32.157899 containerd[1636]: time="2026-01-27T13:03:32.157848711Z" level=info msg="CreateContainer within sandbox \"a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 13:03:32.176816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872892535.mount: Deactivated successfully. Jan 27 13:03:32.180197 containerd[1636]: time="2026-01-27T13:03:32.180150052Z" level=info msg="Container 22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:03:32.190306 containerd[1636]: time="2026-01-27T13:03:32.190254129Z" level=info msg="CreateContainer within sandbox \"a6bd6e171e8abc04354871bd34f33c666d2a639e5a6ec756a115be74c10b9bd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9\"" Jan 27 13:03:32.191198 containerd[1636]: time="2026-01-27T13:03:32.191008029Z" level=info msg="StartContainer for \"22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9\"" Jan 27 13:03:32.192930 containerd[1636]: time="2026-01-27T13:03:32.192851871Z" level=info msg="connecting to shim 22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9" address="unix:///run/containerd/s/55a87120b03a7bf8ffeae83fb67a7d6b5ce71b4c713eda66d9e7a161ba661f31" protocol=ttrpc version=3 Jan 27 13:03:32.221886 systemd[1]: Started cri-containerd-22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9.scope - libcontainer container 22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9. 
Jan 27 13:03:32.250000 audit: BPF prog-id=250 op=LOAD Jan 27 13:03:32.251000 audit: BPF prog-id=251 op=LOAD Jan 27 13:03:32.251000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.251000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.252000 audit: BPF prog-id=251 op=UNLOAD Jan 27 13:03:32.252000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.252000 audit: BPF prog-id=252 op=LOAD Jan 27 13:03:32.252000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.252000 audit: BPF prog-id=253 op=LOAD Jan 27 13:03:32.252000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.252000 audit: BPF prog-id=253 op=UNLOAD Jan 27 13:03:32.252000 audit[5011]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.252000 audit: BPF prog-id=252 op=UNLOAD Jan 27 13:03:32.252000 audit[5011]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.253000 audit: BPF prog-id=254 op=LOAD Jan 27 13:03:32.253000 audit[5011]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4971 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232363233613962316163633133666537666439616663346561373136 Jan 27 13:03:32.281440 containerd[1636]: time="2026-01-27T13:03:32.281387688Z" level=info msg="StartContainer for \"22623a9b1acc13fe7fd9afc4ea716c58bc279853c358faabd1e3935d93b9ffc9\" returns successfully" Jan 27 13:03:32.697013 containerd[1636]: time="2026-01-27T13:03:32.696938984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp4m,Uid:74f07544-3ec6-434d-a104-1f67b11cb370,Namespace:kube-system,Attempt:0,}" Jan 27 13:03:32.890666 systemd-networkd[1541]: cali7924aefe39b: Link UP Jan 27 13:03:32.892857 systemd-networkd[1541]: cali7924aefe39b: Gained carrier Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.767 [INFO][5043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0 coredns-668d6bf9bc- kube-system 74f07544-3ec6-434d-a104-1f67b11cb370 814 0 2026-01-27 13:02:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-4nwk8.gb1.brightbox.com coredns-668d6bf9bc-cbp4m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7924aefe39b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.767 [INFO][5043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.826 [INFO][5055] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" HandleID="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Workload="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" 
Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.826 [INFO][5055] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" HandleID="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Workload="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-4nwk8.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-cbp4m", "timestamp":"2026-01-27 13:03:32.826597328 +0000 UTC"}, Hostname:"srv-4nwk8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.827 [INFO][5055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.827 [INFO][5055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.827 [INFO][5055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-4nwk8.gb1.brightbox.com' Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.838 [INFO][5055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.848 [INFO][5055] ipam/ipam.go 394: Looking up existing affinities for host host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.854 [INFO][5055] ipam/ipam.go 511: Trying affinity for 192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.857 [INFO][5055] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.861 [INFO][5055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.64/26 host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.861 [INFO][5055] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.64.64/26 handle="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.863 [INFO][5055] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.869 [INFO][5055] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.64.64/26 handle="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.878 [INFO][5055] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.64.73/26] block=192.168.64.64/26 handle="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.878 [INFO][5055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.73/26] 
handle="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" host="srv-4nwk8.gb1.brightbox.com" Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.878 [INFO][5055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:03:32.923763 containerd[1636]: 2026-01-27 13:03:32.878 [INFO][5055] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.64.73/26] IPv6=[] ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" HandleID="k8s-pod-network.f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Workload="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" Jan 27 13:03:32.926777 containerd[1636]: 2026-01-27 13:03:32.882 [INFO][5043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"74f07544-3ec6-434d-a104-1f67b11cb370", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-cbp4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7924aefe39b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:32.926777 containerd[1636]: 2026-01-27 13:03:32.883 [INFO][5043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.73/32] ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" Jan 27 13:03:32.926777 containerd[1636]: 2026-01-27 13:03:32.883 [INFO][5043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7924aefe39b ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" 
Jan 27 13:03:32.926777 containerd[1636]: 2026-01-27 13:03:32.892 [INFO][5043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" Jan 27 13:03:32.926777 containerd[1636]: 2026-01-27 13:03:32.895 [INFO][5043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"74f07544-3ec6-434d-a104-1f67b11cb370", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 27, 13, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-4nwk8.gb1.brightbox.com", ContainerID:"f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a", Pod:"coredns-668d6bf9bc-cbp4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7924aefe39b", MAC:"0e:ae:08:9a:de:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 27 13:03:32.926777 containerd[1636]: 2026-01-27 13:03:32.915 [INFO][5043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" Namespace="kube-system" Pod="coredns-668d6bf9bc-cbp4m" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cbp4m-eth0" Jan 27 13:03:32.974565 containerd[1636]: time="2026-01-27T13:03:32.973701103Z" level=info msg="connecting to shim f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a" address="unix:///run/containerd/s/96d0596f18e62cd69700d5e9b07129651cc0b5ed251593a4ee55a500aad8a33f" namespace=k8s.io protocol=ttrpc version=3 Jan 27 13:03:32.976000 audit[5078]: NETFILTER_CFG table=filter:141 family=2 entries=56 op=nft_register_chain pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 27 13:03:32.976000 audit[5078]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=25096 a0=3 a1=7ffc6e04bde0 a2=0 a3=7ffc6e04bdcc items=0 ppid=4585 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:32.976000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 27 13:03:33.051804 systemd[1]: Started cri-containerd-f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a.scope - libcontainer container f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a. Jan 27 13:03:33.071000 audit: BPF prog-id=255 op=LOAD Jan 27 13:03:33.072000 audit: BPF prog-id=256 op=LOAD Jan 27 13:03:33.072000 audit[5092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.073000 audit: BPF prog-id=256 op=UNLOAD Jan 27 13:03:33.073000 audit[5092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.073000 audit: BPF prog-id=257 op=LOAD Jan 27 13:03:33.073000 audit[5092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.073000 audit: BPF prog-id=258 op=LOAD Jan 27 13:03:33.073000 audit[5092]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.074000 audit: BPF prog-id=258 op=UNLOAD Jan 27 13:03:33.074000 
audit[5092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.074000 audit: BPF prog-id=257 op=UNLOAD Jan 27 13:03:33.074000 audit[5092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.074000 audit: BPF prog-id=259 op=LOAD Jan 27 13:03:33.074000 audit[5092]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5080 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637366361616239303264353133636637383030366636646631353461 Jan 27 13:03:33.127773 containerd[1636]: time="2026-01-27T13:03:33.127688400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cbp4m,Uid:74f07544-3ec6-434d-a104-1f67b11cb370,Namespace:kube-system,Attempt:0,} returns sandbox id \"f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a\"" Jan 27 13:03:33.134556 containerd[1636]: time="2026-01-27T13:03:33.134484783Z" level=info msg="CreateContainer within sandbox \"f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 27 13:03:33.151621 containerd[1636]: time="2026-01-27T13:03:33.151476469Z" level=info msg="Container f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547: CDI devices from CRI Config.CDIDevices: []" Jan 27 13:03:33.160009 containerd[1636]: time="2026-01-27T13:03:33.159931519Z" level=info msg="CreateContainer within sandbox \"f76caab902d513cf78006f6df154a4a39709495319d8c51e1f6929d866fcd47a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547\"" Jan 27 13:03:33.161727 containerd[1636]: time="2026-01-27T13:03:33.160805514Z" level=info msg="StartContainer for \"f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547\"" Jan 27 13:03:33.162358 containerd[1636]: time="2026-01-27T13:03:33.162325341Z" level=info msg="connecting to shim f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547" address="unix:///run/containerd/s/96d0596f18e62cd69700d5e9b07129651cc0b5ed251593a4ee55a500aad8a33f" 
protocol=ttrpc version=3 Jan 27 13:03:33.196834 systemd[1]: Started cri-containerd-f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547.scope - libcontainer container f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547. Jan 27 13:03:33.223000 audit: BPF prog-id=260 op=LOAD Jan 27 13:03:33.225000 audit: BPF prog-id=261 op=LOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.225000 audit: BPF prog-id=261 op=UNLOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.225000 audit: BPF prog-id=262 op=LOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.225000 audit: BPF prog-id=263 op=LOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.225000 audit: BPF prog-id=263 op=UNLOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.225000 audit: BPF prog-id=262 op=UNLOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.225000 audit: BPF prog-id=264 op=LOAD Jan 27 13:03:33.225000 audit[5122]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5080 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634356634666531353736313133323237633066616633333038356566 Jan 27 13:03:33.303596 containerd[1636]: time="2026-01-27T13:03:33.303537676Z" level=info msg="StartContainer for \"f45f4fe1576113227c0faf33085ef673cd9a39503c98ef7507693b975f531547\" returns successfully" Jan 27 13:03:33.341876 kubelet[2952]: I0127 13:03:33.339371 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vrljx" podStartSLOduration=62.338500764 podStartE2EDuration="1m2.338500764s" podCreationTimestamp="2026-01-27 13:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:03:33.307924616 +0000 UTC m=+67.882266821" watchObservedRunningTime="2026-01-27 13:03:33.338500764 +0000 UTC m=+67.912842927" Jan 27 13:03:33.379000 audit[5153]: NETFILTER_CFG table=filter:142 family=2 entries=20 op=nft_register_rule pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:33.379000 audit[5153]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe2a9202d0 a2=0 a3=7ffe2a9202bc items=0 ppid=3095 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:33.386000 audit[5153]: NETFILTER_CFG table=nat:143 family=2 entries=14 op=nft_register_rule pid=5153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:33.386000 audit[5153]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe2a9202d0 a2=0 a3=0 items=0 ppid=3095 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:33.439000 audit[5158]: NETFILTER_CFG table=filter:144 family=2 entries=17 op=nft_register_rule pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:33.439000 audit[5158]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc438cd710 a2=0 a3=7ffc438cd6fc items=0 ppid=3095 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:33.447000 audit[5158]: NETFILTER_CFG table=nat:145 family=2 entries=35 op=nft_register_chain pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:33.447000 audit[5158]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc438cd710 a2=0 a3=7ffc438cd6fc items=0 ppid=3095 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:33.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:33.794728 systemd-networkd[1541]: calic8cf9862f66: Gained IPv6LL Jan 27 13:03:34.312798 kubelet[2952]: I0127 13:03:34.312655 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cbp4m" podStartSLOduration=63.312630141 podStartE2EDuration="1m3.312630141s" podCreationTimestamp="2026-01-27 13:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 13:03:34.309601985 +0000 UTC m=+68.883944172" watchObservedRunningTime="2026-01-27 13:03:34.312630141 +0000 UTC m=+68.886972335" Jan 27 13:03:34.472000 audit[5160]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:34.472000 audit[5160]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffcfb1f9d0 a2=0 a3=7fffcfb1f9bc items=0 ppid=3095 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:34.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:34.478000 audit[5160]: NETFILTER_CFG table=nat:147 family=2 entries=44 op=nft_register_rule pid=5160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:34.478000 audit[5160]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffcfb1f9d0 a2=0 a3=7fffcfb1f9bc items=0 ppid=3095 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:34.478000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:34.627806 systemd-networkd[1541]: cali7924aefe39b: Gained IPv6LL Jan 27 13:03:35.359000 audit[5162]: NETFILTER_CFG table=filter:148 family=2 entries=14 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:35.359000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe44669800 a2=0 a3=7ffe446697ec items=0 ppid=3095 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:35.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:35.377000 audit[5162]: NETFILTER_CFG table=nat:149 family=2 entries=56 op=nft_register_chain pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:03:35.377000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe44669800 a2=0 a3=7ffe446697ec items=0 ppid=3095 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:35.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:03:35.698289 containerd[1636]: time="2026-01-27T13:03:35.698005597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:03:36.015212 containerd[1636]: time="2026-01-27T13:03:36.014729203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:36.016379 containerd[1636]: time="2026-01-27T13:03:36.016065634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:03:36.016552 containerd[1636]: time="2026-01-27T13:03:36.016153394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:36.016940 kubelet[2952]: E0127 13:03:36.016820 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:36.017584 kubelet[2952]: E0127 13:03:36.016963 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:36.017584 kubelet[2952]: E0127 13:03:36.017335 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-lgkj9_calico-apiserver(49dcb295-61bb-47ac-9721-51e5abeacfeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:36.018744 kubelet[2952]: E0127 13:03:36.018694 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:03:36.698108 containerd[1636]: time="2026-01-27T13:03:36.697299329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 13:03:37.024332 containerd[1636]: time="2026-01-27T13:03:37.024015831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:37.025768 containerd[1636]: time="2026-01-27T13:03:37.025721025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 13:03:37.026032 containerd[1636]: time="2026-01-27T13:03:37.025868740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" 
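Every failed pull in this stretch follows the same shape: containerd asks ghcr.io to resolve a flatcar/calico tag, gets 404 Not Found back before any bytes are transferred, and the kubelet then surfaces that as ErrImagePull for the pod. Below is a minimal sketch of the same resolution step performed outside the kubelet against the standard OCI distribution API; the anonymous token endpoint and the Accept header values are assumptions about ghcr.io's public behaviour, not something taken from this log.

import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/apiserver"   # repository name taken from the failing PullImage records above
TAG = "v3.30.4"

def manifest_exists(repo: str, tag: str) -> bool:
    # 1) Fetch an anonymous pull token (assumed to be issued without credentials for public repos).
    token_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # 2) HEAD the manifest: HTTP 200 means the tag resolves, 404 matches the records above.
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} resolvable:", manifest_exists(REPO, TAG))

A 404 from the manifest request corresponds to the "failed to resolve image ... not found" text that containerd and the kubelet keep repeating for the calico images in the rest of this log.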
Jan 27 13:03:37.026369 kubelet[2952]: E0127 13:03:37.026306 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:03:37.027193 kubelet[2952]: E0127 13:03:37.026806 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:03:37.027193 kubelet[2952]: E0127 13:03:37.027040 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfrmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccb5f6-wx4tb_calico-system(d20e3435-24a0-4d45-b1d0-2db2610f07b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:37.029034 kubelet[2952]: E0127 13:03:37.028910 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:03:39.701885 containerd[1636]: time="2026-01-27T13:03:39.701536387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 13:03:40.062344 containerd[1636]: time="2026-01-27T13:03:40.062143558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:40.063786 containerd[1636]: time="2026-01-27T13:03:40.063670522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 13:03:40.063786 containerd[1636]: time="2026-01-27T13:03:40.063748075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:40.064595 kubelet[2952]: E0127 13:03:40.064018 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:03:40.064595 kubelet[2952]: E0127 13:03:40.064157 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:03:40.067731 kubelet[2952]: E0127 13:03:40.067639 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:40.071617 containerd[1636]: time="2026-01-27T13:03:40.071386449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 13:03:40.391878 containerd[1636]: time="2026-01-27T13:03:40.391181494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:40.393447 containerd[1636]: time="2026-01-27T13:03:40.393263480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 13:03:40.393447 containerd[1636]: time="2026-01-27T13:03:40.393416883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:40.393817 kubelet[2952]: E0127 13:03:40.393717 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:03:40.393925 kubelet[2952]: E0127 13:03:40.393849 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:03:40.394753 kubelet[2952]: E0127 13:03:40.394664 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:40.398535 kubelet[2952]: E0127 13:03:40.396692 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:40.698731 containerd[1636]: time="2026-01-27T13:03:40.698673577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 13:03:41.014395 containerd[1636]: time="2026-01-27T13:03:41.014068511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 
13:03:41.016583 containerd[1636]: time="2026-01-27T13:03:41.016372390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 13:03:41.016583 containerd[1636]: time="2026-01-27T13:03:41.016432668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:41.016983 kubelet[2952]: E0127 13:03:41.016931 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:03:41.017190 kubelet[2952]: E0127 13:03:41.017133 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:03:41.017698 kubelet[2952]: E0127 13:03:41.017494 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m72dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kpqqc_calico-system(edbc19c4-c5a2-4875-9a7b-5c829dca568c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:41.018120 containerd[1636]: time="2026-01-27T13:03:41.017885471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:03:41.018759 kubelet[2952]: E0127 13:03:41.018700 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:03:41.494207 containerd[1636]: time="2026-01-27T13:03:41.493857532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:41.495387 containerd[1636]: time="2026-01-27T13:03:41.495344752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:03:41.495651 containerd[1636]: time="2026-01-27T13:03:41.495414707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:41.496128 kubelet[2952]: E0127 13:03:41.496002 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:41.497149 kubelet[2952]: E0127 13:03:41.496097 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:03:41.497149 kubelet[2952]: E0127 13:03:41.496954 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzkh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:41.498420 kubelet[2952]: E0127 13:03:41.498323 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:03:42.695639 containerd[1636]: time="2026-01-27T13:03:42.695571188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 13:03:43.040066 containerd[1636]: time="2026-01-27T13:03:43.039703799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:43.041237 containerd[1636]: time="2026-01-27T13:03:43.041166709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 13:03:43.041340 containerd[1636]: time="2026-01-27T13:03:43.041284133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:43.041910 kubelet[2952]: E0127 
13:03:43.041563 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:03:43.041910 kubelet[2952]: E0127 13:03:43.041636 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:03:43.041910 kubelet[2952]: E0127 13:03:43.041817 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f16e2e6836fb414cba60abb811528f7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:43.045017 containerd[1636]: time="2026-01-27T13:03:43.044949611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 13:03:43.373860 containerd[1636]: time="2026-01-27T13:03:43.373336882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:03:43.374763 containerd[1636]: time="2026-01-27T13:03:43.374715979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 13:03:43.374972 containerd[1636]: time="2026-01-27T13:03:43.374741860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 13:03:43.375316 kubelet[2952]: E0127 13:03:43.375178 2952 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:03:43.375316 kubelet[2952]: E0127 13:03:43.375259 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:03:43.375588 kubelet[2952]: E0127 13:03:43.375420 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 13:03:43.377017 kubelet[2952]: E0127 13:03:43.376947 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:03:48.696662 kubelet[2952]: E0127 13:03:48.696545 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:03:51.698396 kubelet[2952]: E0127 13:03:51.698311 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:03:52.697322 kubelet[2952]: E0127 13:03:52.697201 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:03:53.704269 kubelet[2952]: E0127 13:03:53.704195 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:03:55.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.66.190:22-68.220.241.50:35454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:03:55.050368 kernel: kauditd_printk_skb: 108 callbacks suppressed Jan 27 13:03:55.050489 kernel: audit: type=1130 audit(1769519035.037:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.66.190:22-68.220.241.50:35454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:03:55.039079 systemd[1]: Started sshd@9-10.230.66.190:22-68.220.241.50:35454.service - OpenSSH per-connection server daemon (68.220.241.50:35454). Jan 27 13:03:55.670000 audit[5217]: USER_ACCT pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.684971 kernel: audit: type=1101 audit(1769519035.670:766): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.684161 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:03:55.689634 sshd[5217]: Accepted publickey for core from 68.220.241.50 port 35454 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:03:55.680000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.699537 kernel: audit: type=1103 audit(1769519035.680:767): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.709948 kubelet[2952]: E0127 13:03:55.708863 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:03:55.710712 kernel: audit: type=1006 audit(1769519035.680:768): pid=5217 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 27 13:03:55.680000 audit[5217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7057a6e0 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:55.720571 kernel: audit: type=1300 audit(1769519035.680:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7057a6e0 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:03:55.721001 systemd-logind[1615]: New session 13 of user core. 
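The "SHA256:qFbB..." value sshd logs for the accepted public key above is the unpadded base64 of the SHA-256 digest of the raw key blob, i.e. the fingerprint of whichever key sits in core's authorized_keys. A minimal sketch of that computation, in case the fingerprint needs to be matched back to a key file; the key constructed below is a throwaway placeholder, not the key behind the fingerprint in this log.

import base64
import hashlib

def ssh_fingerprint(authorized_keys_entry: str) -> str:
    # The middle field of an authorized_keys line is the base64-encoded key blob;
    # OpenSSH's SHA256 fingerprint is the unpadded base64 of sha256(blob).
    blob_b64 = authorized_keys_entry.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical blob built from filler bytes purely so the sketch runs; it is not a real key.
fake_blob = base64.b64encode(b"\x00\x00\x00\x0bssh-ed25519" + b"\x00" * 36).decode()
print(ssh_fingerprint(f"ssh-ed25519 {fake_blob} core@example"))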
Jan 27 13:03:55.680000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:03:55.728543 kernel: audit: type=1327 audit(1769519035.680:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:03:55.729882 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 27 13:03:55.738000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.747907 kernel: audit: type=1105 audit(1769519035.738:769): pid=5217 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.749000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:55.757542 kernel: audit: type=1103 audit(1769519035.749:770): pid=5221 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:56.655253 sshd[5221]: Connection closed by 68.220.241.50 port 35454 Jan 27 13:03:56.655087 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Jan 27 13:03:56.667000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:56.676076 kernel: audit: type=1106 audit(1769519036.667:771): pid=5217 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:56.667000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:56.682668 kernel: audit: type=1104 audit(1769519036.667:772): pid=5217 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:03:56.680991 systemd[1]: sshd@9-10.230.66.190:22-68.220.241.50:35454.service: Deactivated successfully. 
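The audit PROCTITLE records scattered through this log (for runc, iptables-restore and sshd-session alike) store the process command line as a hex string with NUL bytes separating the arguments. A short sketch for turning those values back into readable command lines, using two values copied from records above:

def decode_proctitle(hex_value: str) -> str:
    # The audit subsystem encodes argv as hex, with NUL separators between arguments.
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# -> sshd-session: core [priv]
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# -> iptables-restore -w 5 -W 100000 --noflush --counters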
Jan 27 13:03:56.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.66.190:22-68.220.241.50:35454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:03:56.690070 systemd[1]: session-13.scope: Deactivated successfully. Jan 27 13:03:56.694301 systemd-logind[1615]: Session 13 logged out. Waiting for processes to exit. Jan 27 13:03:56.697616 systemd-logind[1615]: Removed session 13. Jan 27 13:03:57.701535 kubelet[2952]: E0127 13:03:57.701412 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:04:01.772935 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 13:04:01.774039 kernel: audit: type=1130 audit(1769519041.761:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.66.190:22-68.220.241.50:35458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:01.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.66.190:22-68.220.241.50:35458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:01.762897 systemd[1]: Started sshd@10-10.230.66.190:22-68.220.241.50:35458.service - OpenSSH per-connection server daemon (68.220.241.50:35458). 
Jan 27 13:04:02.339000 audit[5239]: USER_ACCT pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.341817 sshd[5239]: Accepted publickey for core from 68.220.241.50 port 35458 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:02.346555 kernel: audit: type=1101 audit(1769519042.339:775): pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.345000 audit[5239]: CRED_ACQ pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.350740 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:02.352785 kernel: audit: type=1103 audit(1769519042.345:776): pid=5239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.352865 kernel: audit: type=1006 audit(1769519042.345:777): pid=5239 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 27 13:04:02.345000 audit[5239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe63a56ff0 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:02.360549 kernel: audit: type=1300 audit(1769519042.345:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe63a56ff0 a2=3 a3=0 items=0 ppid=1 pid=5239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:02.345000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:02.364548 kernel: audit: type=1327 audit(1769519042.345:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:02.367781 systemd-logind[1615]: New session 14 of user core. Jan 27 13:04:02.375271 systemd[1]: Started session-14.scope - Session 14 of User core. 
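Read together, the audit records for SSH sessions 13 and 14 follow the same lifecycle: USER_ACCT and CRED_ACQ when the key is accepted, USER_START when the PAM session opens, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection unit on logout. A minimal sketch for grouping journal lines into that per-session view; the regular expression only assumes the key=value layout visible in the records above, and 4294967295 is the kernel's "unset" session id.

import re
from collections import defaultdict

AUDIT_RE = re.compile(r"audit(?:\[\d+\])?: (?P<type>[A-Z_]+) .*?\bses=(?P<ses>\d+)")

def session_timeline(lines):
    # Collect audit record types per login session id, in the order they appear.
    timeline = defaultdict(list)
    for line in lines:
        m = AUDIT_RE.search(line)
        if m and m.group("ses") != "4294967295":   # skip records with no login session
            timeline[m.group("ses")].append(m.group("type"))
    return dict(timeline)

sample = [
    'Jan 27 13:03:55.738000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=13 ...',
    'Jan 27 13:03:56.667000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=13 ...',
    'Jan 27 13:03:56.667000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=13 ...',
]
print(session_timeline(sample))   # {'13': ['USER_START', 'USER_END', 'CRED_DISP']}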
Jan 27 13:04:02.379000 audit[5239]: USER_START pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.387713 kernel: audit: type=1105 audit(1769519042.379:778): pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.386000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.393562 kernel: audit: type=1103 audit(1769519042.386:779): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.892717 sshd[5245]: Connection closed by 68.220.241.50 port 35458 Jan 27 13:04:02.893806 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:02.897000 audit[5239]: USER_END pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.909570 kernel: audit: type=1106 audit(1769519042.897:780): pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.897000 audit[5239]: CRED_DISP pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.916560 kernel: audit: type=1104 audit(1769519042.897:781): pid=5239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:02.917236 systemd[1]: sshd@10-10.230.66.190:22-68.220.241.50:35458.service: Deactivated successfully. Jan 27 13:04:02.920915 systemd[1]: session-14.scope: Deactivated successfully. Jan 27 13:04:02.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.66.190:22-68.220.241.50:35458 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:02.928035 systemd-logind[1615]: Session 14 logged out. Waiting for processes to exit. Jan 27 13:04:02.929922 systemd-logind[1615]: Removed session 14. 
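After the immediate ErrImagePull failures, the kubelet does not retry at a fixed rate: the "Back-off pulling image" / ImagePullBackOff records above come from its image-pull back-off, which roughly doubles the wait after each consecutive failure; that is why fresh pull attempts reappear below at widening intervals. A minimal sketch of that pattern; the 10-second base and 300-second cap are the commonly cited kubelet defaults, assumed here rather than read from this node's configuration.

def backoff_delays(base: float = 10.0, cap: float = 300.0, attempts: int = 6):
    # Exponential back-off with a ceiling, the shape behind ImagePullBackOff.
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(backoff_delays()))   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]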
Jan 27 13:04:03.698218 containerd[1636]: time="2026-01-27T13:04:03.698095467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:04:04.048999 containerd[1636]: time="2026-01-27T13:04:04.048778294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:04.050402 containerd[1636]: time="2026-01-27T13:04:04.050356287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:04:04.050809 containerd[1636]: time="2026-01-27T13:04:04.050472798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:04.050889 kubelet[2952]: E0127 13:04:04.050695 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:04.052079 kubelet[2952]: E0127 13:04:04.051208 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:04.052079 kubelet[2952]: E0127 13:04:04.051576 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-lgkj9_calico-apiserver(49dcb295-61bb-47ac-9721-51e5abeacfeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:04.053538 kubelet[2952]: E0127 13:04:04.053327 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:04:04.697312 containerd[1636]: time="2026-01-27T13:04:04.697130150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 13:04:05.051656 containerd[1636]: time="2026-01-27T13:04:05.051384430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:05.054059 containerd[1636]: time="2026-01-27T13:04:05.053949026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 13:04:05.055033 containerd[1636]: time="2026-01-27T13:04:05.054233816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:05.056175 kubelet[2952]: E0127 13:04:05.055317 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:04:05.056175 kubelet[2952]: E0127 13:04:05.055386 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:04:05.056175 kubelet[2952]: E0127 13:04:05.055580 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfrmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccb5f6-wx4tb_calico-system(d20e3435-24a0-4d45-b1d0-2db2610f07b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:05.057685 kubelet[2952]: E0127 13:04:05.057621 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:04:05.701054 containerd[1636]: time="2026-01-27T13:04:05.700998099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 13:04:06.024194 containerd[1636]: time="2026-01-27T13:04:06.023807866Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:06.025857 containerd[1636]: time="2026-01-27T13:04:06.025727611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 13:04:06.026482 containerd[1636]: time="2026-01-27T13:04:06.025965988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:06.027212 kubelet[2952]: E0127 13:04:06.027066 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:04:06.027436 kubelet[2952]: E0127 13:04:06.027176 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:04:06.028269 kubelet[2952]: E0127 13:04:06.028024 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:06.030889 
containerd[1636]: time="2026-01-27T13:04:06.030542420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 13:04:06.365364 containerd[1636]: time="2026-01-27T13:04:06.365152052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:06.367124 containerd[1636]: time="2026-01-27T13:04:06.366706371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 13:04:06.367124 containerd[1636]: time="2026-01-27T13:04:06.366749644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:06.367937 kubelet[2952]: E0127 13:04:06.367819 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:04:06.368338 kubelet[2952]: E0127 13:04:06.367943 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:04:06.369806 containerd[1636]: time="2026-01-27T13:04:06.369759023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 13:04:06.369919 kubelet[2952]: E0127 13:04:06.369816 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m72dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kpqqc_calico-system(edbc19c4-c5a2-4875-9a7b-5c829dca568c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:06.371378 kubelet[2952]: E0127 13:04:06.371294 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:04:06.685349 containerd[1636]: time="2026-01-27T13:04:06.685087411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:06.688674 containerd[1636]: time="2026-01-27T13:04:06.688642491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:06.689858 containerd[1636]: time="2026-01-27T13:04:06.689667634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 13:04:06.693318 kubelet[2952]: E0127 13:04:06.690568 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:04:06.693318 kubelet[2952]: E0127 13:04:06.692609 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:04:06.693318 kubelet[2952]: E0127 13:04:06.692822 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:06.694629 kubelet[2952]: E0127 13:04:06.694592 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:04:07.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.66.190:22-68.220.241.50:52046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:08.008791 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 13:04:08.008888 kernel: audit: type=1130 audit(1769519047.998:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.66.190:22-68.220.241.50:52046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:07.999858 systemd[1]: Started sshd@11-10.230.66.190:22-68.220.241.50:52046.service - OpenSSH per-connection server daemon (68.220.241.50:52046). Jan 27 13:04:08.592744 kernel: audit: type=1101 audit(1769519048.577:784): pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.577000 audit[5267]: USER_ACCT pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.593448 sshd[5267]: Accepted publickey for core from 68.220.241.50 port 52046 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:08.604051 kernel: audit: type=1103 audit(1769519048.595:785): pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.595000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.603566 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:08.609645 kernel: audit: type=1006 audit(1769519048.595:786): pid=5267 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 27 13:04:08.595000 audit[5267]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45c5dbd0 a2=3 a3=0 items=0 ppid=1 pid=5267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:08.615609 kernel: audit: type=1300 audit(1769519048.595:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe45c5dbd0 a2=3 a3=0 items=0 ppid=1 pid=5267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:08.595000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:08.621551 kernel: audit: type=1327 audit(1769519048.595:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:08.631250 systemd-logind[1615]: New session 15 of user core. Jan 27 13:04:08.637789 systemd[1]: Started session-15.scope - Session 15 of User core. 
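Every pull above fails the same way: ghcr.io answers 404 Not Found for the v3.30.4 Calico tags, which points at a missing tag on the registry side rather than node-side networking or credentials. A quick way to confirm that independently of containerd is to ask the registry's OCI distribution API for the manifest. The sketch below assumes ghcr.io issues anonymous pull tokens for public images through its /token endpoint (the standard registry token flow); the image name and tag are taken from the log entries above.

import json
import urllib.error
import urllib.parse
import urllib.request

def tag_exists(name: str, tag: str, registry: str = "ghcr.io") -> bool:
    """HEAD the manifest for <name>:<tag>; False means the registry returns 404."""
    token_url = (f"https://{registry}/token?"
                 + urllib.parse.urlencode({"scope": f"repository:{name}:pull"}))
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{registry}/v2/{name}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    name, tag = "flatcar/calico/apiserver", "v3.30.4"
    print(f"{name}:{tag} ->", "found" if tag_exists(name, tag) else "404 not found")

If this reports the tag missing, the fix is on the publishing side (push or mirror the tag, or point the pods at a tag that exists); nothing on this node will make the pulls succeed.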
Jan 27 13:04:08.655269 kernel: audit: type=1105 audit(1769519048.645:787): pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.645000 audit[5267]: USER_START pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.655000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.661774 kernel: audit: type=1103 audit(1769519048.655:788): pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:08.699870 containerd[1636]: time="2026-01-27T13:04:08.699326807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:04:09.026863 containerd[1636]: time="2026-01-27T13:04:09.026509596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:09.029239 containerd[1636]: time="2026-01-27T13:04:09.029083983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:04:09.029239 containerd[1636]: time="2026-01-27T13:04:09.029096210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:09.029912 kubelet[2952]: E0127 13:04:09.029783 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:09.029912 kubelet[2952]: E0127 13:04:09.029865 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:09.032871 kubelet[2952]: E0127 13:04:09.030895 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzkh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:09.032871 kubelet[2952]: E0127 13:04:09.032343 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:04:09.083718 sshd[5271]: Connection closed by 68.220.241.50 port 52046 Jan 27 13:04:09.084370 sshd-session[5267]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:09.088000 audit[5267]: USER_END pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.106852 kernel: audit: type=1106 audit(1769519049.088:789): pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.108939 systemd[1]: sshd@11-10.230.66.190:22-68.220.241.50:52046.service: Deactivated successfully. Jan 27 13:04:09.117283 systemd[1]: session-15.scope: Deactivated successfully. Jan 27 13:04:09.123749 systemd-logind[1615]: Session 15 logged out. Waiting for processes to exit. Jan 27 13:04:09.088000 audit[5267]: CRED_DISP pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.131582 kernel: audit: type=1104 audit(1769519049.088:790): pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.133266 systemd-logind[1615]: Removed session 15. Jan 27 13:04:09.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.66.190:22-68.220.241.50:52046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:09.191519 systemd[1]: Started sshd@12-10.230.66.190:22-68.220.241.50:52052.service - OpenSSH per-connection server daemon (68.220.241.50:52052). Jan 27 13:04:09.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.66.190:22-68.220.241.50:52052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:09.699010 containerd[1636]: time="2026-01-27T13:04:09.698645172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 13:04:09.744000 audit[5285]: USER_ACCT pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.746817 sshd[5285]: Accepted publickey for core from 68.220.241.50 port 52052 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:09.747000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.747000 audit[5285]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6946bee0 a2=3 a3=0 items=0 ppid=1 pid=5285 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:09.747000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:09.750887 sshd-session[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:09.763722 systemd-logind[1615]: New session 16 of user core. Jan 27 13:04:09.769863 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 27 13:04:09.774000 audit[5285]: USER_START pid=5285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:09.780000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:10.128509 containerd[1636]: time="2026-01-27T13:04:10.128294648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:10.130110 containerd[1636]: time="2026-01-27T13:04:10.130016543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 13:04:10.130393 containerd[1636]: time="2026-01-27T13:04:10.130047836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:10.131177 kubelet[2952]: E0127 13:04:10.131105 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:04:10.132663 kubelet[2952]: E0127 13:04:10.131751 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:04:10.133220 kubelet[2952]: E0127 13:04:10.133056 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f16e2e6836fb414cba60abb811528f7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:10.154417 containerd[1636]: time="2026-01-27T13:04:10.154353302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 13:04:10.231589 sshd[5289]: Connection closed by 68.220.241.50 port 52052 Jan 27 13:04:10.233724 sshd-session[5285]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:10.235000 audit[5285]: USER_END pid=5285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:10.235000 audit[5285]: CRED_DISP pid=5285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:10.240441 systemd-logind[1615]: Session 16 logged out. Waiting for processes to exit. Jan 27 13:04:10.241058 systemd[1]: sshd@12-10.230.66.190:22-68.220.241.50:52052.service: Deactivated successfully. Jan 27 13:04:10.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.66.190:22-68.220.241.50:52052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:10.247737 systemd[1]: session-16.scope: Deactivated successfully. Jan 27 13:04:10.253122 systemd-logind[1615]: Removed session 16. 
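By this point the same NotFound failure has hit the apiserver, kube-controllers, csi, node-driver-registrar, goldmane and whisker images. The kubelet "Error syncing pod, skipping" entries already carry everything needed to see the blast radius: the pod name and the image references embedded in the error string. A minimal sketch that tallies them from a dump like this one; the regexes assume the field layout shown in the entries above.

import re
import sys
from collections import defaultdict

# pod="<namespace>/<name>" plus the error string that embeds the image refs.
POD = re.compile(r'"Error syncing pod, skipping" err="(?P<err>.*?)" pod="(?P<pod>[^"]+)"')
IMAGE = re.compile(r'ghcr\.io/[A-Za-z0-9._/-]+:[A-Za-z0-9._-]+')

def blocked_pods(text):
    """Map each failing image reference to the set of pods stuck on it."""
    by_image = defaultdict(set)
    for m in POD.finditer(text):
        for image in set(IMAGE.findall(m['err'])):
            by_image[image].add(m['pod'])
    return by_image

if __name__ == "__main__":
    for image, pods in sorted(blocked_pods(sys.stdin.read()).items()):
        print(f"{image}: {', '.join(sorted(pods))}")

Run against this dump it makes plain that every Calico component on the node is blocked on the same set of missing v3.30.4 tags.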
Jan 27 13:04:10.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.66.190:22-68.220.241.50:52068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:10.337528 systemd[1]: Started sshd@13-10.230.66.190:22-68.220.241.50:52068.service - OpenSSH per-connection server daemon (68.220.241.50:52068). Jan 27 13:04:10.467588 containerd[1636]: time="2026-01-27T13:04:10.467495563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:10.468703 containerd[1636]: time="2026-01-27T13:04:10.468646515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 13:04:10.468803 containerd[1636]: time="2026-01-27T13:04:10.468775425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:10.469908 kubelet[2952]: E0127 13:04:10.469827 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:04:10.470433 kubelet[2952]: E0127 13:04:10.469912 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:04:10.470433 kubelet[2952]: E0127 13:04:10.470127 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:10.471792 kubelet[2952]: E0127 13:04:10.471701 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:04:10.858000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:10.861708 sshd[5299]: Accepted publickey for core from 68.220.241.50 port 52068 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:10.862000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:10.862000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe031b27f0 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:10.862000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:10.866429 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:10.878932 systemd-logind[1615]: New session 17 of user core. Jan 27 13:04:10.885355 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 27 13:04:10.891000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:10.894000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:11.295333 sshd[5303]: Connection closed by 68.220.241.50 port 52068 Jan 27 13:04:11.296401 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:11.299000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:11.299000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:11.306718 systemd-logind[1615]: Session 17 logged out. Waiting for processes to exit. Jan 27 13:04:11.307234 systemd[1]: sshd@13-10.230.66.190:22-68.220.241.50:52068.service: Deactivated successfully. Jan 27 13:04:11.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.66.190:22-68.220.241.50:52068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:11.311058 systemd[1]: session-17.scope: Deactivated successfully. Jan 27 13:04:11.315061 systemd-logind[1615]: Removed session 17. Jan 27 13:04:16.403590 systemd[1]: Started sshd@14-10.230.66.190:22-68.220.241.50:54028.service - OpenSSH per-connection server daemon (68.220.241.50:54028). Jan 27 13:04:16.417099 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 27 13:04:16.417202 kernel: audit: type=1130 audit(1769519056.402:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.66.190:22-68.220.241.50:54028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:16.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.66.190:22-68.220.241.50:54028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:16.936000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:16.956254 kernel: audit: type=1101 audit(1769519056.936:811): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:16.956404 sshd[5316]: Accepted publickey for core from 68.220.241.50 port 54028 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:16.972541 kernel: audit: type=1103 audit(1769519056.955:812): pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:16.972747 kernel: audit: type=1006 audit(1769519056.959:813): pid=5316 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 27 13:04:16.955000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:16.970125 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:16.959000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff20e96590 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:16.982545 kernel: audit: type=1300 audit(1769519056.959:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff20e96590 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:16.959000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:16.991543 kernel: audit: type=1327 audit(1769519056.959:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:16.994737 systemd-logind[1615]: New session 18 of user core. Jan 27 13:04:17.003106 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 27 13:04:17.010000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.018560 kernel: audit: type=1105 audit(1769519057.010:814): pid=5316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.023000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.029597 kernel: audit: type=1103 audit(1769519057.023:815): pid=5320 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.331139 sshd[5320]: Connection closed by 68.220.241.50 port 54028 Jan 27 13:04:17.332530 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:17.360006 kernel: audit: type=1106 audit(1769519057.335:816): pid=5316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.335000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.355573 systemd[1]: sshd@14-10.230.66.190:22-68.220.241.50:54028.service: Deactivated successfully. Jan 27 13:04:17.363014 systemd[1]: session-18.scope: Deactivated successfully. Jan 27 13:04:17.366727 systemd-logind[1615]: Session 18 logged out. Waiting for processes to exit. Jan 27 13:04:17.369392 systemd-logind[1615]: Removed session 18. Jan 27 13:04:17.335000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.374705 kernel: audit: type=1104 audit(1769519057.335:817): pid=5316 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:17.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.66.190:22-68.220.241.50:54028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:17.701670 kubelet[2952]: E0127 13:04:17.701558 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:04:18.700815 kubelet[2952]: E0127 13:04:18.700412 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:04:18.701595 kubelet[2952]: E0127 13:04:18.701497 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:04:20.702780 kubelet[2952]: E0127 13:04:20.701416 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:04:21.698934 kubelet[2952]: E0127 13:04:21.698864 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:04:22.445067 systemd[1]: Started sshd@15-10.230.66.190:22-68.220.241.50:54032.service - OpenSSH per-connection server daemon (68.220.241.50:54032). 
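The kubelet has now moved from ErrImagePull to ImagePullBackOff for these pods, i.e. it no longer retries each pull immediately but waits an exponentially growing interval between attempts. The sketch below only illustrates that schedule; the 10-second initial delay, doubling factor and 5-minute cap are assumed, commonly cited kubelet defaults and were not read from this node's configuration, so treat the numbers as illustrative.

def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, attempts=8):
    """Yield (attempt, delay, cumulative wait) for a capped exponential backoff."""
    delay, total = initial, 0.0
    for attempt in range(1, attempts + 1):
        total += delay
        yield attempt, delay, total
        delay = min(delay * factor, cap)

if __name__ == "__main__":
    for attempt, delay, total in backoff_schedule():
        print(f"retry {attempt}: wait {delay:>5.0f}s (cumulative {total:>5.0f}s)")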
Jan 27 13:04:22.458378 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 13:04:22.461849 kernel: audit: type=1130 audit(1769519062.443:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.66.190:22-68.220.241.50:54032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:22.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.66.190:22-68.220.241.50:54032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:23.087000 audit[5362]: USER_ACCT pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.095099 sshd[5362]: Accepted publickey for core from 68.220.241.50 port 54032 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:23.098766 kernel: audit: type=1101 audit(1769519063.087:820): pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.098949 kernel: audit: type=1103 audit(1769519063.092:821): pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.092000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.100288 sshd-session[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:23.106538 kernel: audit: type=1006 audit(1769519063.092:822): pid=5362 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 27 13:04:23.092000 audit[5362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc481a9e50 a2=3 a3=0 items=0 ppid=1 pid=5362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:23.115360 kernel: audit: type=1300 audit(1769519063.092:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc481a9e50 a2=3 a3=0 items=0 ppid=1 pid=5362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:23.115436 kernel: audit: type=1327 audit(1769519063.092:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:23.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:23.113351 systemd-logind[1615]: New session 19 of user core. Jan 27 13:04:23.121041 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 27 13:04:23.128000 audit[5362]: USER_START pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.136588 kernel: audit: type=1105 audit(1769519063.128:823): pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.137000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.144641 kernel: audit: type=1103 audit(1769519063.137:824): pid=5370 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.674272 sshd[5370]: Connection closed by 68.220.241.50 port 54032 Jan 27 13:04:23.677553 sshd-session[5362]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:23.684000 audit[5362]: USER_END pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.696333 kernel: audit: type=1106 audit(1769519063.684:825): pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.699602 systemd[1]: sshd@15-10.230.66.190:22-68.220.241.50:54032.service: Deactivated successfully. Jan 27 13:04:23.715546 kernel: audit: type=1104 audit(1769519063.684:826): pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.684000 audit[5362]: CRED_DISP pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:23.709834 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 27 13:04:23.718402 kubelet[2952]: E0127 13:04:23.718251 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:04:23.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.66.190:22-68.220.241.50:54032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:23.723890 systemd-logind[1615]: Session 19 logged out. Waiting for processes to exit. Jan 27 13:04:23.726849 systemd-logind[1615]: Removed session 19. Jan 27 13:04:25.779386 containerd[1636]: time="2026-01-27T13:04:25.779305131Z" level=info msg="StopPodSandbox for \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\"" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.001 [WARNING][5390] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.002 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.002 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" iface="eth0" netns="" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.002 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.002 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.073 [INFO][5397] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.074 [INFO][5397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.074 [INFO][5397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.087 [WARNING][5397] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.088 [INFO][5397] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.091 [INFO][5397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:04:26.099660 containerd[1636]: 2026-01-27 13:04:26.096 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.099660 containerd[1636]: time="2026-01-27T13:04:26.099405080Z" level=info msg="TearDown network for sandbox \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" successfully" Jan 27 13:04:26.100880 containerd[1636]: time="2026-01-27T13:04:26.099498734Z" level=info msg="StopPodSandbox for \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" returns successfully" Jan 27 13:04:26.218223 containerd[1636]: time="2026-01-27T13:04:26.218147946Z" level=info msg="RemovePodSandbox for \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\"" Jan 27 13:04:26.218432 containerd[1636]: time="2026-01-27T13:04:26.218245817Z" level=info msg="Forcibly stopping sandbox \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\"" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.291 [WARNING][5412] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" WorkloadEndpoint="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.291 [INFO][5412] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.291 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" iface="eth0" netns="" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.291 [INFO][5412] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.291 [INFO][5412] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.340 [INFO][5420] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.341 [INFO][5420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.341 [INFO][5420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.352 [WARNING][5420] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.352 [INFO][5420] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" HandleID="k8s-pod-network.9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Workload="srv--4nwk8.gb1.brightbox.com-k8s-whisker--5dd56df978--nczgt-eth0" Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.354 [INFO][5420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 27 13:04:26.361624 containerd[1636]: 2026-01-27 13:04:26.358 [INFO][5412] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90" Jan 27 13:04:26.361624 containerd[1636]: time="2026-01-27T13:04:26.361081148Z" level=info msg="TearDown network for sandbox \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" successfully" Jan 27 13:04:26.400589 containerd[1636]: time="2026-01-27T13:04:26.400503402Z" level=info msg="Ensure that sandbox 9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90 in task-service has been cleanup successfully" Jan 27 13:04:26.513675 containerd[1636]: time="2026-01-27T13:04:26.513605700Z" level=info msg="RemovePodSandbox \"9ea8dafd269f2d77494bc2497b36c655774b19784c99d837b8c8c21096df2a90\" returns successfully" Jan 27 13:04:28.787457 systemd[1]: Started sshd@16-10.230.66.190:22-68.220.241.50:59752.service - OpenSSH per-connection server daemon (68.220.241.50:59752). Jan 27 13:04:28.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.66.190:22-68.220.241.50:59752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:28.797185 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 13:04:28.797701 kernel: audit: type=1130 audit(1769519068.786:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.66.190:22-68.220.241.50:59752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:29.400000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.414823 kernel: audit: type=1101 audit(1769519069.400:829): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.413591 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:29.416601 sshd[5429]: Accepted publickey for core from 68.220.241.50 port 59752 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:29.409000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.425547 kernel: audit: type=1103 audit(1769519069.409:830): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.428536 kernel: audit: type=1006 audit(1769519069.409:831): pid=5429 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 27 13:04:29.409000 audit[5429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc05dd6d0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:29.434545 kernel: audit: type=1300 audit(1769519069.409:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc05dd6d0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:29.409000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:29.437965 kernel: audit: type=1327 audit(1769519069.409:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:29.442029 systemd-logind[1615]: New session 20 of user core. Jan 27 13:04:29.450033 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 27 13:04:29.458000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.467556 kernel: audit: type=1105 audit(1769519069.458:832): pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.465000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.472541 kernel: audit: type=1103 audit(1769519069.465:833): pid=5433 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.703225 kubelet[2952]: E0127 13:04:29.702963 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:04:29.703225 kubelet[2952]: E0127 13:04:29.703108 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:04:29.906047 sshd[5433]: Connection closed by 68.220.241.50 port 59752 Jan 27 13:04:29.907091 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:29.912000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.923610 kernel: audit: type=1106 audit(1769519069.912:834): pid=5429 
uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.921936 systemd[1]: sshd@16-10.230.66.190:22-68.220.241.50:59752.service: Deactivated successfully. Jan 27 13:04:29.925911 systemd[1]: session-20.scope: Deactivated successfully. Jan 27 13:04:29.912000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.936637 kernel: audit: type=1104 audit(1769519069.912:835): pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:29.938491 systemd-logind[1615]: Session 20 logged out. Waiting for processes to exit. Jan 27 13:04:29.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.66.190:22-68.220.241.50:59752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:29.942259 systemd-logind[1615]: Removed session 20. Jan 27 13:04:31.697664 kubelet[2952]: E0127 13:04:31.697002 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:04:33.700321 kubelet[2952]: E0127 13:04:33.700216 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:04:35.018298 systemd[1]: Started sshd@17-10.230.66.190:22-68.220.241.50:39448.service - OpenSSH per-connection server daemon (68.220.241.50:39448). Jan 27 13:04:35.032155 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 13:04:35.032296 kernel: audit: type=1130 audit(1769519075.017:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.66.190:22-68.220.241.50:39448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:35.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.66.190:22-68.220.241.50:39448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:35.577000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.590310 sshd[5447]: Accepted publickey for core from 68.220.241.50 port 39448 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:35.594724 kernel: audit: type=1101 audit(1769519075.577:838): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.596000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.601460 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:35.603676 kernel: audit: type=1103 audit(1769519075.596:839): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.607536 kernel: audit: type=1006 audit(1769519075.596:840): pid=5447 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 27 13:04:35.596000 audit[5447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefa2539b0 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:35.613656 kernel: audit: type=1300 audit(1769519075.596:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefa2539b0 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:35.596000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:35.620571 kernel: audit: type=1327 audit(1769519075.596:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:35.625629 systemd-logind[1615]: New session 21 of user core. Jan 27 13:04:35.632927 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 27 13:04:35.639000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.647586 kernel: audit: type=1105 audit(1769519075.639:841): pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.648000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.656599 kernel: audit: type=1103 audit(1769519075.648:842): pid=5451 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:35.706713 kubelet[2952]: E0127 13:04:35.706507 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:04:35.708910 kubelet[2952]: E0127 13:04:35.708352 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:04:36.056550 sshd[5451]: Connection closed by 68.220.241.50 port 39448 Jan 27 13:04:36.057383 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:36.066000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.081664 kernel: audit: type=1106 audit(1769519076.066:843): pid=5447 uid=0 auid=500 
ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.066000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.091561 kernel: audit: type=1104 audit(1769519076.066:844): pid=5447 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.092412 systemd[1]: sshd@17-10.230.66.190:22-68.220.241.50:39448.service: Deactivated successfully. Jan 27 13:04:36.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.66.190:22-68.220.241.50:39448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:36.098966 systemd[1]: session-21.scope: Deactivated successfully. Jan 27 13:04:36.101729 systemd-logind[1615]: Session 21 logged out. Waiting for processes to exit. Jan 27 13:04:36.104833 systemd-logind[1615]: Removed session 21. Jan 27 13:04:36.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.66.190:22-68.220.241.50:39456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:36.163485 systemd[1]: Started sshd@18-10.230.66.190:22-68.220.241.50:39456.service - OpenSSH per-connection server daemon (68.220.241.50:39456). Jan 27 13:04:36.725000 audit[5462]: USER_ACCT pid=5462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.727088 sshd[5462]: Accepted publickey for core from 68.220.241.50 port 39456 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:36.727000 audit[5462]: CRED_ACQ pid=5462 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.727000 audit[5462]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaa038890 a2=3 a3=0 items=0 ppid=1 pid=5462 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:36.727000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:36.731618 sshd-session[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:36.742297 systemd-logind[1615]: New session 22 of user core. Jan 27 13:04:36.748830 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 27 13:04:36.752000 audit[5462]: USER_START pid=5462 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:36.756000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:37.598163 sshd[5466]: Connection closed by 68.220.241.50 port 39456 Jan 27 13:04:37.600877 sshd-session[5462]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:37.611000 audit[5462]: USER_END pid=5462 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:37.611000 audit[5462]: CRED_DISP pid=5462 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:37.617441 systemd-logind[1615]: Session 22 logged out. Waiting for processes to exit. Jan 27 13:04:37.618830 systemd[1]: sshd@18-10.230.66.190:22-68.220.241.50:39456.service: Deactivated successfully. Jan 27 13:04:37.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.66.190:22-68.220.241.50:39456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:37.622985 systemd[1]: session-22.scope: Deactivated successfully. Jan 27 13:04:37.625888 systemd-logind[1615]: Removed session 22. Jan 27 13:04:37.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.66.190:22-68.220.241.50:39468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:37.708919 systemd[1]: Started sshd@19-10.230.66.190:22-68.220.241.50:39468.service - OpenSSH per-connection server daemon (68.220.241.50:39468). 
Jan 27 13:04:38.302969 sshd[5477]: Accepted publickey for core from 68.220.241.50 port 39468 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:38.301000 audit[5477]: USER_ACCT pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:38.303000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:38.303000 audit[5477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9d8e67e0 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:38.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:38.307068 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:38.316799 systemd-logind[1615]: New session 23 of user core. Jan 27 13:04:38.323811 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 27 13:04:38.328000 audit[5477]: USER_START pid=5477 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:38.331000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:39.506000 audit[5491]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:04:39.506000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd1f11f0c0 a2=0 a3=7ffd1f11f0ac items=0 ppid=3095 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:39.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:04:39.512000 audit[5491]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:04:39.512000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd1f11f0c0 a2=0 a3=0 items=0 ppid=3095 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:39.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:04:39.543000 audit[5493]: NETFILTER_CFG 
table=filter:152 family=2 entries=38 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:04:39.543000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc6d2c55f0 a2=0 a3=7ffc6d2c55dc items=0 ppid=3095 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:39.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:04:39.546000 audit[5493]: NETFILTER_CFG table=nat:153 family=2 entries=20 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:04:39.546000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc6d2c55f0 a2=0 a3=0 items=0 ppid=3095 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:39.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:04:39.593555 sshd[5481]: Connection closed by 68.220.241.50 port 39468 Jan 27 13:04:39.595686 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:39.603000 audit[5477]: USER_END pid=5477 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:39.604000 audit[5477]: CRED_DISP pid=5477 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:39.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.66.190:22-68.220.241.50:39468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:39.614338 systemd[1]: sshd@19-10.230.66.190:22-68.220.241.50:39468.service: Deactivated successfully. Jan 27 13:04:39.616780 systemd-logind[1615]: Session 23 logged out. Waiting for processes to exit. Jan 27 13:04:39.620658 systemd[1]: session-23.scope: Deactivated successfully. Jan 27 13:04:39.626848 systemd-logind[1615]: Removed session 23. Jan 27 13:04:39.697739 systemd[1]: Started sshd@20-10.230.66.190:22-68.220.241.50:39480.service - OpenSSH per-connection server daemon (68.220.241.50:39480). Jan 27 13:04:39.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.66.190:22-68.220.241.50:39480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:40.258000 audit[5498]: USER_ACCT pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.267208 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 27 13:04:40.267436 kernel: audit: type=1101 audit(1769519080.258:869): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.269058 sshd[5498]: Accepted publickey for core from 68.220.241.50 port 39480 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:40.272992 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:40.267000 audit[5498]: CRED_ACQ pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.283075 kernel: audit: type=1103 audit(1769519080.267:870): pid=5498 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.289859 kernel: audit: type=1006 audit(1769519080.267:871): pid=5498 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 27 13:04:40.267000 audit[5498]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6e9ca470 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:40.296032 kernel: audit: type=1300 audit(1769519080.267:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6e9ca470 a2=3 a3=0 items=0 ppid=1 pid=5498 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:40.300634 kernel: audit: type=1327 audit(1769519080.267:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:40.267000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:40.301676 systemd-logind[1615]: New session 24 of user core. Jan 27 13:04:40.308798 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 27 13:04:40.323619 kernel: audit: type=1105 audit(1769519080.314:872): pid=5498 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.314000 audit[5498]: USER_START pid=5498 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.323000 audit[5502]: CRED_ACQ pid=5502 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:40.330580 kernel: audit: type=1103 audit(1769519080.323:873): pid=5502 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.222420 sshd[5502]: Connection closed by 68.220.241.50 port 39480 Jan 27 13:04:41.222944 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:41.224000 audit[5498]: USER_END pid=5498 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.238570 kernel: audit: type=1106 audit(1769519081.224:874): pid=5498 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.244758 systemd[1]: sshd@20-10.230.66.190:22-68.220.241.50:39480.service: Deactivated successfully. Jan 27 13:04:41.249251 systemd[1]: session-24.scope: Deactivated successfully. Jan 27 13:04:41.224000 audit[5498]: CRED_DISP pid=5498 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.251552 systemd-logind[1615]: Session 24 logged out. Waiting for processes to exit. Jan 27 13:04:41.256863 kernel: audit: type=1104 audit(1769519081.224:875): pid=5498 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.66.190:22-68.220.241.50:39480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:41.261123 systemd-logind[1615]: Removed session 24. 
Jan 27 13:04:41.264551 kernel: audit: type=1131 audit(1769519081.243:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.66.190:22-68.220.241.50:39480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:41.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.66.190:22-68.220.241.50:39496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:41.333600 systemd[1]: Started sshd@21-10.230.66.190:22-68.220.241.50:39496.service - OpenSSH per-connection server daemon (68.220.241.50:39496). Jan 27 13:04:41.903000 audit[5512]: USER_ACCT pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.906096 sshd[5512]: Accepted publickey for core from 68.220.241.50 port 39496 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:41.906000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.907000 audit[5512]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe28c0e190 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:41.907000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:41.909931 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:41.923360 systemd-logind[1615]: New session 25 of user core. Jan 27 13:04:41.928781 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 27 13:04:41.933000 audit[5512]: USER_START pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:41.937000 audit[5516]: CRED_ACQ pid=5516 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:42.351222 sshd[5516]: Connection closed by 68.220.241.50 port 39496 Jan 27 13:04:42.352141 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:42.353000 audit[5512]: USER_END pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:42.353000 audit[5512]: CRED_DISP pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:42.360196 systemd[1]: sshd@21-10.230.66.190:22-68.220.241.50:39496.service: Deactivated successfully. Jan 27 13:04:42.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.66.190:22-68.220.241.50:39496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:42.364280 systemd[1]: session-25.scope: Deactivated successfully. Jan 27 13:04:42.368584 systemd-logind[1615]: Session 25 logged out. Waiting for processes to exit. Jan 27 13:04:42.371378 systemd-logind[1615]: Removed session 25. 
Jan 27 13:04:42.698282 kubelet[2952]: E0127 13:04:42.697499 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:04:43.699642 kubelet[2952]: E0127 13:04:43.699438 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:04:44.696811 kubelet[2952]: E0127 13:04:44.696738 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:04:46.699993 kubelet[2952]: E0127 13:04:46.698912 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:04:46.701912 containerd[1636]: time="2026-01-27T13:04:46.699717637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 27 13:04:46.703796 kubelet[2952]: E0127 13:04:46.700890 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:04:47.018645 containerd[1636]: time="2026-01-27T13:04:47.018466274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:47.022740 containerd[1636]: time="2026-01-27T13:04:47.022660297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:47.023049 containerd[1636]: time="2026-01-27T13:04:47.022752689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 27 13:04:47.033633 kubelet[2952]: E0127 13:04:47.022975 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:04:47.035033 kubelet[2952]: E0127 13:04:47.034910 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 27 13:04:47.041321 kubelet[2952]: E0127 13:04:47.040994 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m72dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kpqqc_calico-system(edbc19c4-c5a2-4875-9a7b-5c829dca568c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:47.046067 kubelet[2952]: E0127 13:04:47.044578 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:04:47.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.66.190:22-68.220.241.50:56750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:47.465464 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 27 13:04:47.465998 kernel: audit: type=1130 audit(1769519087.458:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.66.190:22-68.220.241.50:56750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:47.460326 systemd[1]: Started sshd@22-10.230.66.190:22-68.220.241.50:56750.service - OpenSSH per-connection server daemon (68.220.241.50:56750). 
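The goldmane pull above fails with a plain 404 from ghcr.io, so the tag ghcr.io/flatcar/calico/goldmane:v3.30.4 simply does not exist in the registry and every retry will end the same way. The following is a minimal sketch, not part of this host's tooling, of the same existence check done by hand against the OCI distribution API with an anonymous GHCR pull token; the repository and tag come from the log, while the endpoint details are standard registry behavior and should be treated as assumptions:

// checktag.go - probe ghcr.io for a specific image tag (sketch).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4" // values taken from the log above

	// 1. Anonymous pull token for the repository (works for public images).
	tokURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull&service=ghcr.io", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; 200 means the tag exists, 404 is the "not found" seen above.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}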
Jan 27 13:04:47.991000 audit[5534]: USER_ACCT pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:47.994420 sshd[5534]: Accepted publickey for core from 68.220.241.50 port 56750 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:47.998064 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:48.003618 kernel: audit: type=1101 audit(1769519087.991:887): pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:47.993000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.008909 kernel: audit: type=1103 audit(1769519087.993:888): pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.013712 kernel: audit: type=1006 audit(1769519087.994:889): pid=5534 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 27 13:04:48.013806 kernel: audit: type=1300 audit(1769519087.994:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff97f32610 a2=3 a3=0 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:47.994000 audit[5534]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff97f32610 a2=3 a3=0 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:48.016676 systemd-logind[1615]: New session 26 of user core. Jan 27 13:04:48.020609 kernel: audit: type=1327 audit(1769519087.994:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:47.994000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:48.024818 systemd[1]: Started session-26.scope - Session 26 of User core. 
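The Accepted publickey record above identifies the key only by its SHA256 fingerprint. OpenSSH derives that fingerprint from the public key blob, so it can be matched back to an authorized_keys entry offline; a small sketch using golang.org/x/crypto/ssh, where the key file path is an assumption:

// fingerprints.go - print SHA256 fingerprints for authorized_keys entries (sketch).
package main

import (
	"bufio"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	f, err := os.Open("/home/core/.ssh/authorized_keys") // assumed path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Bytes()
		if len(line) == 0 || line[0] == '#' {
			continue
		}
		key, comment, _, _, err := ssh.ParseAuthorizedKey(line)
		if err != nil {
			continue // skip malformed entries
		}
		// ssh.FingerprintSHA256 produces the same "SHA256:..." form that sshd logs.
		fmt.Printf("%s %s %s\n", key.Type(), ssh.FingerprintSHA256(key), comment)
	}
}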
Jan 27 13:04:48.030000 audit[5534]: USER_START pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.038547 kernel: audit: type=1105 audit(1769519088.030:890): pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.039366 kernel: audit: type=1103 audit(1769519088.037:891): pid=5538 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.037000 audit[5538]: CRED_ACQ pid=5538 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.417331 sshd[5538]: Connection closed by 68.220.241.50 port 56750 Jan 27 13:04:48.418384 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:48.422000 audit[5534]: USER_END pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.442747 kernel: audit: type=1106 audit(1769519088.422:892): pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.442854 kernel: audit: type=1104 audit(1769519088.422:893): pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.422000 audit[5534]: CRED_DISP pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:48.438561 systemd[1]: sshd@22-10.230.66.190:22-68.220.241.50:56750.service: Deactivated successfully. Jan 27 13:04:48.446875 systemd[1]: session-26.scope: Deactivated successfully. Jan 27 13:04:48.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.66.190:22-68.220.241.50:56750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:48.452650 systemd-logind[1615]: Session 26 logged out. Waiting for processes to exit. Jan 27 13:04:48.457948 systemd-logind[1615]: Removed session 26. 
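Session 26 above is opened and torn down by 68.220.241.50 in well under a second (new at 13:04:48.016676, removed at 13:04:48.457948), which is more consistent with automated checks than with an interactive login. If this journal is saved to a file with one record per line, the systemd-logind New session / Removed session pairs can be matched to get per-session durations; a throwaway sketch, with the timestamp layout taken from the log and the file name assumed:

// sessiondur.go - pair logind "New session"/"Removed session" records and print durations (sketch).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// Timestamp prefix as it appears in this journal: "Jan 27 13:04:48.016676"
	tsLayout = "Jan _2 15:04:05.000000"
	newRe    = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) of user`)
	delRe    = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func main() {
	f, err := os.Open("node.log") // assumed: a saved copy of this journal, one record per line
	if err != nil {
		panic(err)
	}
	defer f.Close()

	opened := map[string]time.Time{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some records here are very long
	for sc.Scan() {
		line := sc.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(tsLayout, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := delRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if end, err := time.Parse(tsLayout, m[1]); err == nil {
					fmt.Printf("session %s: %v\n", m[2], end.Sub(start))
				}
				delete(opened, m[2])
			}
		}
	}
}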
Jan 27 13:04:48.813000 audit[5550]: NETFILTER_CFG table=filter:154 family=2 entries=26 op=nft_register_rule pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:04:48.813000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcdfb1a360 a2=0 a3=7ffcdfb1a34c items=0 ppid=3095 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:48.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:04:48.819000 audit[5550]: NETFILTER_CFG table=nat:155 family=2 entries=104 op=nft_register_chain pid=5550 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 27 13:04:48.819000 audit[5550]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcdfb1a360 a2=0 a3=7ffcdfb1a34c items=0 ppid=3095 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:48.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 27 13:04:53.530549 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 27 13:04:53.530764 kernel: audit: type=1130 audit(1769519093.518:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.66.190:22-68.220.241.50:32918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:53.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.66.190:22-68.220.241.50:32918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:53.519919 systemd[1]: Started sshd@23-10.230.66.190:22-68.220.241.50:32918.service - OpenSSH per-connection server daemon (68.220.241.50:32918). 
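The audit PROCTITLE values above are the process's argv, hex-encoded with NUL separators between arguments; the netfilter entries decode to iptables-restore -w 5 -W 100000 --noflush --counters and the sshd entries to sshd-session: core [priv]. A few lines of Go that perform the same decoding:

// proctitle.go - decode an audit PROCTITLE hex string back into argv (sketch).
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decode(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the audit record.
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	for _, h := range []string{
		// values copied from the audit records above
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
		"737368642D73657373696F6E3A20636F7265205B707269765D",
	} {
		s, err := decode(h)
		if err != nil {
			panic(err)
		}
		fmt.Println(s)
	}
}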
Jan 27 13:04:54.123973 kernel: audit: type=1101 audit(1769519094.106:898): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.106000 audit[5578]: USER_ACCT pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.126304 sshd[5578]: Accepted publickey for core from 68.220.241.50 port 32918 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:04:54.130556 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:04:54.127000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.137104 kernel: audit: type=1103 audit(1769519094.127:899): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.137459 kernel: audit: type=1006 audit(1769519094.127:900): pid=5578 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 27 13:04:54.127000 audit[5578]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff863ec10 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:54.146997 kernel: audit: type=1300 audit(1769519094.127:900): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff863ec10 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:04:54.127000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:54.153272 kernel: audit: type=1327 audit(1769519094.127:900): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:04:54.157415 systemd-logind[1615]: New session 27 of user core. Jan 27 13:04:54.166774 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 27 13:04:54.180831 kernel: audit: type=1105 audit(1769519094.172:901): pid=5578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.172000 audit[5578]: USER_START pid=5578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.181000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.188616 kernel: audit: type=1103 audit(1769519094.181:902): pid=5582 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.562788 sshd[5582]: Connection closed by 68.220.241.50 port 32918 Jan 27 13:04:54.563266 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Jan 27 13:04:54.575375 kernel: audit: type=1106 audit(1769519094.565:903): pid=5578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.565000 audit[5578]: USER_END pid=5578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.576950 systemd[1]: sshd@23-10.230.66.190:22-68.220.241.50:32918.service: Deactivated successfully. Jan 27 13:04:54.565000 audit[5578]: CRED_DISP pid=5578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.594846 kernel: audit: type=1104 audit(1769519094.565:904): pid=5578 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:04:54.594382 systemd[1]: session-27.scope: Deactivated successfully. Jan 27 13:04:54.601730 systemd-logind[1615]: Session 27 logged out. Waiting for processes to exit. Jan 27 13:04:54.606467 systemd-logind[1615]: Removed session 27. Jan 27 13:04:54.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.66.190:22-68.220.241.50:32918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 27 13:04:55.701838 containerd[1636]: time="2026-01-27T13:04:55.700838264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 27 13:04:56.037548 containerd[1636]: time="2026-01-27T13:04:56.036457010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:56.038650 containerd[1636]: time="2026-01-27T13:04:56.038614447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:56.038849 containerd[1636]: time="2026-01-27T13:04:56.038717990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 27 13:04:56.039362 kubelet[2952]: E0127 13:04:56.039253 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:04:56.040182 kubelet[2952]: E0127 13:04:56.039410 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 27 13:04:56.040182 kubelet[2952]: E0127 13:04:56.039780 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfrmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cdccb5f6-wx4tb_calico-system(d20e3435-24a0-4d45-b1d0-2db2610f07b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:56.041461 kubelet[2952]: E0127 13:04:56.041139 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cdccb5f6-wx4tb" podUID="d20e3435-24a0-4d45-b1d0-2db2610f07b9" Jan 27 13:04:57.697457 containerd[1636]: time="2026-01-27T13:04:57.696997860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:04:58.045241 containerd[1636]: time="2026-01-27T13:04:58.044799607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:58.046576 containerd[1636]: time="2026-01-27T13:04:58.046537001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:58.046714 containerd[1636]: time="2026-01-27T13:04:58.046584246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:04:58.047169 kubelet[2952]: E0127 13:04:58.047017 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:58.048348 kubelet[2952]: E0127 13:04:58.047132 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:58.048786 kubelet[2952]: E0127 13:04:58.048676 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8xtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-lgkj9_calico-apiserver(49dcb295-61bb-47ac-9721-51e5abeacfeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:58.050023 kubelet[2952]: E0127 13:04:58.049946 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-lgkj9" podUID="49dcb295-61bb-47ac-9721-51e5abeacfeb" Jan 27 13:04:58.699122 containerd[1636]: time="2026-01-27T13:04:58.698984144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 27 13:04:58.701242 kubelet[2952]: E0127 13:04:58.698990 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kpqqc" podUID="edbc19c4-c5a2-4875-9a7b-5c829dca568c" Jan 27 13:04:59.047015 containerd[1636]: time="2026-01-27T13:04:59.046794932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:59.048578 
containerd[1636]: time="2026-01-27T13:04:59.048532201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 27 13:04:59.048956 containerd[1636]: time="2026-01-27T13:04:59.048657633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:59.049829 kubelet[2952]: E0127 13:04:59.049745 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:04:59.050650 kubelet[2952]: E0127 13:04:59.050604 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 27 13:04:59.053331 kubelet[2952]: E0127 13:04:59.051928 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:59.054779 containerd[1636]: time="2026-01-27T13:04:59.053938567Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 27 13:04:59.364910 containerd[1636]: time="2026-01-27T13:04:59.364732083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:59.367535 containerd[1636]: time="2026-01-27T13:04:59.367435010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 27 13:04:59.367731 containerd[1636]: time="2026-01-27T13:04:59.367460925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:59.367996 kubelet[2952]: E0127 13:04:59.367947 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:04:59.368169 kubelet[2952]: E0127 13:04:59.368142 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 27 13:04:59.369481 containerd[1636]: time="2026-01-27T13:04:59.368702858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 27 13:04:59.369599 kubelet[2952]: E0127 13:04:59.369356 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f16e2e6836fb414cba60abb811528f7a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:59.671871 systemd[1]: Started 
sshd@24-10.230.66.190:22-68.220.241.50:32924.service - OpenSSH per-connection server daemon (68.220.241.50:32924). Jan 27 13:04:59.679967 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 27 13:04:59.680187 kernel: audit: type=1130 audit(1769519099.671:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.66.190:22-68.220.241.50:32924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:59.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.66.190:22-68.220.241.50:32924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:04:59.748544 containerd[1636]: time="2026-01-27T13:04:59.746506136Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:04:59.749713 containerd[1636]: time="2026-01-27T13:04:59.749648716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 27 13:04:59.750329 containerd[1636]: time="2026-01-27T13:04:59.749800661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 27 13:04:59.751683 kubelet[2952]: E0127 13:04:59.751624 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:59.751811 kubelet[2952]: E0127 13:04:59.751720 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 27 13:04:59.752297 kubelet[2952]: E0127 13:04:59.752230 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzkh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-675cb5c68f-6grht_calico-apiserver(31e32b52-76dd-4c4a-b037-c1818999e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 27 13:04:59.754533 containerd[1636]: time="2026-01-27T13:04:59.753750024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 27 13:04:59.754856 kubelet[2952]: E0127 13:04:59.754765 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-675cb5c68f-6grht" podUID="31e32b52-76dd-4c4a-b037-c1818999e71b" Jan 27 13:05:00.065041 containerd[1636]: time="2026-01-27T13:05:00.064494389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:05:00.066644 containerd[1636]: time="2026-01-27T13:05:00.066497026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 27 13:05:00.067075 containerd[1636]: time="2026-01-27T13:05:00.066896243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 27 13:05:00.070116 kubelet[2952]: E0127 13:05:00.070046 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:05:00.071227 kubelet[2952]: E0127 13:05:00.071143 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 27 13:05:00.075235 kubelet[2952]: E0127 13:05:00.074801 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf79d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gghgq_calico-system(8a652343-1e00-4d74-90a4-253edca0200b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 27 13:05:00.077457 kubelet[2952]: E0127 13:05:00.077094 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gghgq" podUID="8a652343-1e00-4d74-90a4-253edca0200b" Jan 27 13:05:00.078689 containerd[1636]: time="2026-01-27T13:05:00.077287218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 27 13:05:00.260000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.272745 kernel: audit: type=1101 audit(1769519100.260:907): pid=5607 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.274717 sshd[5607]: Accepted publickey for core from 68.220.241.50 port 32924 ssh2: RSA SHA256:qFbBUAskQWQI86KArR8ylc6GDTzNbvgxKCRcSDss1zc Jan 27 13:05:00.276000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.280833 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 27 13:05:00.284661 kernel: audit: type=1103 audit(1769519100.276:908): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.288556 kernel: audit: type=1006 audit(1769519100.277:909): pid=5607 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 27 13:05:00.277000 audit[5607]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc898be9c0 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:05:00.294540 kernel: audit: type=1300 audit(1769519100.277:909): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc898be9c0 a2=3 a3=0 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 27 13:05:00.294633 kernel: audit: type=1327 audit(1769519100.277:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:05:00.277000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 27 13:05:00.301285 systemd-logind[1615]: New session 28 of user core. Jan 27 13:05:00.309779 systemd[1]: Started session-28.scope - Session 28 of User core. 
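The &Container{...} blobs in the UnhandledError records above are the kubelet printing the core/v1 Container object it could not start, so the full container spec is recoverable from the log, just hard to read. Below is a trimmed restatement of the calico-apiserver container from those records using the k8s.io/api Go types; the values are copied from the log, omitted fields keep their zero values, and this is a reading aid rather than the manifest the operator actually applies:

// apiserver_container.go - restate the logged calico-apiserver container spec in Go (sketch).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	c := corev1.Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4", // the tag the registry reports as not found
		Args: []string{
			"--secure-port=5443",
			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
		},
		Env: []corev1.EnvVar{
			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
			{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
			{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
			{Name: "LOG_LEVEL", Value: "info"},
			{Name: "MULTI_INTERFACE_MODE", Value: "none"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "calico-apiserver-certs", ReadOnly: true, MountPath: "/calico-apiserver-certs"},
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path:   "/readyz",
					Port:   intstr.FromInt(5443),
					Scheme: corev1.URISchemeHTTPS,
				},
			},
			TimeoutSeconds:   5,
			PeriodSeconds:    60,
			SuccessThreshold: 1,
			FailureThreshold: 3,
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			Privileged:               boolPtr(false),
			RunAsUser:                int64Ptr(10001),
			RunAsGroup:               int64Ptr(10001),
			RunAsNonRoot:             boolPtr(true),
			AllowPrivilegeEscalation: boolPtr(false),
			SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
		},
	}
	fmt.Printf("%+v\n", c)
}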
Jan 27 13:05:00.324546 kernel: audit: type=1105 audit(1769519100.317:910): pid=5607 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.317000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.320000 audit[5611]: CRED_ACQ pid=5611 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.331550 kernel: audit: type=1103 audit(1769519100.320:911): pid=5611 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.394078 containerd[1636]: time="2026-01-27T13:05:00.393766595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 27 13:05:00.395314 containerd[1636]: time="2026-01-27T13:05:00.395220406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 27 13:05:00.395645 containerd[1636]: time="2026-01-27T13:05:00.395230329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 27 13:05:00.397575 kubelet[2952]: E0127 13:05:00.396178 2952 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:05:00.397575 kubelet[2952]: E0127 13:05:00.396434 2952 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 27 13:05:00.397575 kubelet[2952]: E0127 13:05:00.396870 2952 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jf8nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7dbc4fc484-72nqj_calico-system(7c2a8eb8-de11-4d21-a4ea-93f79258d54d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 27 13:05:00.398295 kubelet[2952]: E0127 13:05:00.398225 2952 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dbc4fc484-72nqj" podUID="7c2a8eb8-de11-4d21-a4ea-93f79258d54d" Jan 27 13:05:00.829594 sshd[5611]: Connection closed by 68.220.241.50 port 32924 Jan 27 13:05:00.830747 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Jan 27 13:05:00.833000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.846050 kernel: audit: type=1106 audit(1769519100.833:912): pid=5607 uid=0 auid=500 
ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.834000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.848017 systemd[1]: sshd@24-10.230.66.190:22-68.220.241.50:32924.service: Deactivated successfully. Jan 27 13:05:00.852970 kernel: audit: type=1104 audit(1769519100.834:913): pid=5607 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 27 13:05:00.849882 systemd-logind[1615]: Session 28 logged out. Waiting for processes to exit. Jan 27 13:05:00.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.66.190:22-68.220.241.50:32924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 27 13:05:00.857780 systemd[1]: session-28.scope: Deactivated successfully. Jan 27 13:05:00.862806 systemd-logind[1615]: Removed session 28.
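Every ImagePullBackOff entry in this section follows the same pattern: the kubelet keeps the pod, retries the pull, and stretches the delay between attempts up to a fixed cap. The commonly documented defaults are roughly a 10 second initial delay doubling to a 5 minute ceiling, but those numbers are an assumption here, not something this log proves. A minimal sketch of that capped exponential backoff shape:

// backoff.go - capped exponential backoff of the kind ImagePullBackOff describes (sketch).
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay and clamps it at limit.
func nextDelay(prev, initial, limit time.Duration) time.Duration {
	if prev <= 0 {
		return initial
	}
	if d := prev * 2; d < limit {
		return d
	}
	return limit
}

func main() {
	// Assumed parameters, matching the commonly cited kubelet defaults.
	const initial = 10 * time.Second
	const maxDelay = 5 * time.Minute

	var delay time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		delay = nextDelay(delay, initial, maxDelay)
		fmt.Printf("attempt %d: wait %v before the next pull\n", attempt, delay)
	}
}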